The Pub/Sub pattern is a versatile one-way messaging pattern where a publisher generates data/messages, and a subscriber registers to receive specific types of messages. It can be implemented using a peer-to-peer architecture or a message broker to mediate communication.
The above image illustrates the Peer-to-Peer Pub/Sub model, where a Publisher sends messages directly to Subscribers without a mediator. Subscribers need to know the address or the endpoint of the Publisher to get messages.
Note: A node, in this instance, typically refers to an active participant in the messaging network, which could be either a service that publishes information or a service that receives information (a subscriber).
In the above image, the Pub/Sub model uses a message broker as a central hub to deliver messages between publishers and subscribers. The broker mediates the message exchange, distributing messages from publishers to subscribers. The subscriber nodes subscribe to the broker rather than the publisher directly.
The presence of a broker improves the decoupling between the system’s nodes since both the publisher and subscribers interact only with the broker.
In this tutorial, you will build a real-time chat application to further demonstrate this pattern.
To start the server-side implementation, we will initialize a basic Node.js app using the command:
npm init -y
The above command creates a default package.json file.
A package.json file is a key component in Node.js projects. It serves as a manifest for the project, containing metadata such as the project name, version, dependencies, scripts, and more. When you add dependencies to your project using npm install or yarn add, the package.json file is automatically updated to reflect the newly added dependencies.
Next, we will install the WebSocket (ws) dependency package that will be needed during the entire course of this build:
npm install ws
The server-side implementation will be a basic chat server. We will follow the workflow below:
Create a file named app.js in your directory and put the code below inside:
const http = require("http");

const server = http.createServer((req, res) => {
  res.end("Hello Chat App");
});

const PORT = 3459;
server.listen(PORT, () => {
  console.log(`Server up and running on port ${PORT}`);
});
The createServer method on the built-in http module of Node.js is used to set up the server. The PORT at which the server should listen for requests is set, and the listen method is called on the server instance to listen for incoming requests on the specified port.
Run the command node app.js in your terminal, and you should have a response like this:
Output
Server up and running on port 3459
If you make a request to this port on your browser, you should have something like this as your response:
Create a file called index.html
in the root directory and copy the below code:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
</head>
<body>
<p>Serving HTML file</p>
</body>
</html>
This is a basic HTML file that renders a short paragraph. Now, we have to read this file and serve it as the response whenever an HTTP request is made to our server.
const http = require("http");
const fs = require("fs");
const path = require("path");

const server = http.createServer((req, res) => {
  const htmlFilePath = path.join(__dirname, "index.html");
  fs.readFile(htmlFilePath, (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("Error occurred while reading file");
      return; // stop here so we don't also attempt to send a 200 response
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(data);
  });
});

const PORT = 3459;
server.listen(PORT, () => {
  console.log(`Server up and running on port ${PORT}`);
});
Here, we use the built-in path module and its join function to concatenate path segments. Then, the readFile function is used to read the index.html file asynchronously. It takes two arguments: the path of the file to be read and a callback. If an error occurs, a 500 status code is sent in the response header, and the error message is sent back to the client.
If the file is read successfully, we send a 200 success status code in the response header and the file's content back to the client. If no encoding is specified, such as UTF-8, the raw buffer is returned; otherwise, the contents are returned as a string.
Make a request to the server on your browser, and you should have this:
const WebSocket = require("ws");

const webSocketServer = new WebSocket.Server({ server });

webSocketServer.on("connection", (client) => {
  console.log("successfully connected to the client");
  client.on("message", (streamMessage) => {
    console.log("message", streamMessage);
    distributeClientMessages(streamMessage);
  });
});

const distributeClientMessages = (message) => {
  for (const client of webSocketServer.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message);
    }
  }
};
In the preceding code, we create a new WebSocket server, webSocketServer, and attach it to our existing HTTP server. This allows us to handle both standard HTTP requests and WebSocket connections on the same port, 3459.
The connection event is triggered when a WebSocket connection is successfully established. The client in the callback function is a WebSocket connection object representing the connection to the client. It is used to send and receive messages and to listen for events like message from the client.
The distributeClientMessages function sends received messages to all connected clients. It takes a message argument and iterates over the clients connected to our server, checking the connection state of each client (readyState === WebSocket.OPEN). This ensures the server sends messages only to clients with open connections. If a client's connection is open, the server sends the message to that client using the client.send(message) method.
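The broadcast loop is easy to adapt. For example, to echo a message to everyone except the sender, the same readyState check applies. The sketch below models clients as plain objects so the logic can be seen in isolation (in the real app they are WebSocket connection objects, and the OPEN constant mirrors WebSocket.OPEN):

```javascript
// Standalone sketch of the broadcast logic, with the sender excluded.
// "Clients" are plain objects here; in the real app they are WebSocket
// connection objects and OPEN would be WebSocket.OPEN.
const OPEN = 1;

function distributeExceptSender(clients, sender, message) {
  for (const client of clients) {
    if (client !== sender && client.readyState === OPEN) {
      client.send(message);
    }
  }
}

// Tiny usage example with fake clients that record what they receive.
const makeClient = () => ({
  readyState: OPEN,
  inbox: [],
  send(m) { this.inbox.push(m); },
});
const a = makeClient(), b = makeClient();
distributeExceptSender([a, b], a, "hi");
console.log(a.inbox.length, b.inbox); // 0 [ 'hi' ]
```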
For the client-side implementation, we will modify our index.html file a little.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
</head>
<body>
<p>Pub/Sub Pattern with Chat Messaging</p>
<div id="messageContainer"></div>
<form id="messageForm">
<input
type="text"
id="messageText"
placeholder="Send a message"
style="
padding: 10px;
margin: 5px;
border-radius: 5px;
border: 1px solid #ccc;
outline: none;
"
onfocus="this.style.borderColor='#007bff';"
onblur="this.style.borderColor='#ccc';"
/>
<input
type="submit"
value="Send Message"
style="
padding: 10px;
margin: 5px;
border-radius: 5px;
background-color: #007bff;
color: white;
border: none;
cursor: pointer;
"
onmouseover="this.style.backgroundColor='#0056b3';"
onmouseout="this.style.backgroundColor='#007bff';"
/>
</form>
<script>
const url = window.location.host;
const socket = new WebSocket(`ws://${url}`);
console.log("url", url); // localhost:3459
console.log("socket", socket); // { url: "ws://localhost:3459/", readyState: 0, bufferedAmount: 0, onopen: null, onerror: null, onclose: null, extensions: "", protocol: "", onmessage: null, binaryType: "blob" }
</script>
</body>
</html>
In this piece of code, we added a form element with an input and a button for sending messages. WebSocket connections are initiated by clients; to communicate with the WebSocket-enabled server we set up earlier, we create an instance of the WebSocket object, specifying the ws:// URL that identifies the server we want to use. When logged, the url and socket variables hold the host our server is listening on (port 3459) and the WebSocket object, respectively.
So, when you make a request to the server in your browser, you should see this:
Let’s upgrade our script so that we can send messages from the client to the server and receive messages from the server.
<script>
  const url = window.location.host;
  const socket = new WebSocket(`ws://${url}`);
  const messageContainer = document.getElementById("messageContainer");

  socket.onmessage = function (eventMessage) {
    eventMessage.data.text().then((text) => {
      const messageContent = document.createElement("p");
      messageContent.innerHTML = text;
      messageContainer.appendChild(messageContent);
    });
  };

  const form = document.getElementById("messageForm");
  form.addEventListener("submit", (event) => {
    event.preventDefault();
    const message = document.getElementById("messageText").value;
    socket.send(message);
    document.getElementById("messageText").value = "";
  });
</script>
As previously mentioned, we retrieve the host that the client (browser) used to reach our server and create a new WebSocket object instance with that URL. Then, we add a submit event listener on the form element that fires when the Send Message button is clicked. The text entered by the user is extracted from the input element, and the send method is called on the socket instance to send the message to the server.
Note: To send a message to the server over the WebSocket connection, the send() method of the WebSocket object is invoked. It expects a single message argument, which can be an ArrayBuffer, Blob, string, or typed array. This method queues the specified message for transmission and returns immediately, without waiting for the message to be delivered.
The onmessage event on the socket object is triggered when a message is received from the server. It is used to update the user interface with the incoming message. The eventMessage parameter in the callback function holds the data (the message) sent from the server, but it arrives as a Blob. The text() method is then called on the Blob data; it returns a promise, which is resolved using then() to get the actual text from the server.
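The same Blob-to-text step can be tried outside the browser, since Node 18+ ships a global Blob. This small sketch does it with an async helper instead of .then():

```javascript
// Reading text out of a Blob, as the onmessage handler above does.
// Node 18+ provides a global Blob; in the browser it is always available.
async function blobToText(blob) {
  return blob.text(); // resolves with the Blob's contents as a string
}

const blob = new Blob(["Hello from the server"]);
blobToText(blob).then((text) => console.log(text)); // Hello from the server
```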
Let’s test what we have. Start the server by running:
node app.js
Then open http://localhost:3459/ in two different browser tabs and try sending messages between the tabs to test:
Let’s say our application starts growing, and we try to scale it by running multiple instances of our chat server. What we want to achieve is that two different users connected to two different servers should be able to send text messages to each other successfully. Currently, we have only one server; if we start another server, say on http://localhost:3460/, it will not receive the messages from the server on port 3459, i.e., only users connected to 3460 can chat with each other. In the current implementation, when a chat message is sent to a working server instance, it is distributed locally, only to the clients connected to that particular server, as shown when we open http://localhost:3459/ in two different browsers. Now, let’s see how we can integrate two different servers so they can talk to each other.
Redis is a fast and flexible in-memory data structure store. It is often used as a database or a cache server to cache data. Additionally, it can be used to implement a centralized Pub/Sub message exchange pattern. Redis’s speed and flexibility have made it a very popular choice for sharing data in a distributed system.
The aim here is to integrate our chat servers using Redis as a message broker. Each server instance publishes any message received from its clients (browsers) to the broker, and each instance also subscribes to the broker so it receives the messages published by the other instances.
Let’s modify our app.js file:
//app.js
const http = require("http");
const fs = require("fs");
const path = require("path");
const WebSocket = require("ws");
const Redis = require("ioredis");

const redisPublisher = new Redis();
const redisSubscriber = new Redis();

const server = http.createServer((req, res) => {
  const htmlFilePath = path.join(__dirname, "index.html");
  fs.readFile(htmlFilePath, (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("Error occurred while reading file");
      return; // stop here so we don't also attempt to send a 200 response
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(data);
  });
});

const webSocketServer = new WebSocket.Server({ server });

webSocketServer.on("connection", (client) => {
  console.log("successfully connected to the client");
  client.on("message", (streamMessage) => {
    redisPublisher.publish("chat_messages", streamMessage);
  });
});

redisSubscriber.subscribe("chat_messages");

redisSubscriber.on("message", (channel, message) => {
  console.log("redis", channel, message);
  for (const client of webSocketServer.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message);
    }
  }
});

const PORT = process.argv[2] || 3459;
server.listen(PORT, () => {
  console.log(`Server up and running on port ${PORT}`);
});
Here, we take advantage of Redis’s publish/subscribe capabilities. Two separate Redis connection instances are created: one for publishing messages and the other for subscribing to a channel. When a message is sent from the client, we publish it to a Redis channel named “chat_messages” using the publish method on the redisPublisher instance. The subscribe method is called on the redisSubscriber instance to subscribe to the same chat_messages channel. Whenever a message is published to this channel, the redisSubscriber message event listener is triggered. This listener iterates over all currently connected WebSocket clients and sends the received message to each one. This ensures that when one user sends a message, all other users connected to any server instance receive that message in real time.
If you start two different servers, say:
node app.js 3459
node app.js 3460
then when a chat text is sent on one instance, the message is broadcast across our connected servers rather than staying on one particular server. You can test this by opening http://localhost:3459/ and http://localhost:3460/, then sending chats between them and seeing that the messages are broadcast across the two servers in real time.
You can monitor the messages published to a channel from the redis-cli, and you can also subscribe to the channel to receive the published messages:
Run the command redis-cli, then enter MONITOR. Go back to your browser and start a chat. In your terminal, you should see something like this, assuming you send a chat text of Wow:
To see the published messages as a subscriber, run redis-cli again and enter SUBSCRIBE channelName; channelName in our case is chat_messages. You should have something like this in your terminal if you send the text Great from the browser:
Now, we can have multiple instances of our server running on different ports or even different machines, and as long as they subscribe to the same Redis channel, they can receive and broadcast messages to all connected clients, ensuring users can chat seamlessly across instances.
Remember that we discussed the Pub/Sub pattern implementation using a message broker in the introduction section. This example sums it up perfectly.
In the figure above, there are two different clients connected to chat servers. The chat servers are interconnected, not directly, but through a Redis instance. This means that while they handle client connections independently, they share information (chat messages) through a common medium (Redis). Each chat server up there connects to Redis. This connection is used to publish messages to Redis and subscribe to Redis channels to receive messages. When a user sends a message, the chat server publishes it to the specified channel on Redis.
When Redis receives a published message, it broadcasts this message to all subscribed chat servers. Each chat server then relays the message to all connected clients, ensuring that every user receives the messages sent by any user, regardless of which server they’re connected to.
This architecture allows us to horizontally scale our chat application by adding more server instances as needed. Each instance can handle its own set of connected clients, while Redis’s publish/subscribe capabilities ensure consistent message distribution across all instances. This setup is efficient for handling large numbers of simultaneous users and helps ensure the high availability of your application.
In this tutorial, we have learnt about the Publish/Subscribe pattern while creating a simple chat application to demonstrate it, using Redis as a message broker. Up next is learning how to implement a peer-to-peer messaging system for cases where a message broker might not be the best solution, for example, in complex distributed systems where a single point of failure (the broker) is not an option.
You will find the complete source code of this tutorial here on GitHub.
The author selected No Kid Hungry to receive a donation as part of the Write for DOnations program.
The mod_proxy and mod_proxy_http modules are enabled, and the proxy configuration directives ProxyPass and ProxyPassReverse point to http://localhost:3000/search. The Node.js application runs well independently and listens on port 3000, as confirmed by netstat. However, when accessed via Apache, it results in a 502 error. The Apache error log shows “Connection refused” errors, but no further specifics are given. I’ve ensured that there are no firewall issues blocking the connection between Apache and Node.js. Any insights or suggestions to resolve this would be greatly appreciated.
Specifically this:
app.get("/", (req, res) => {
  res.redirect('/assets/index.html');
});
When I hit the page in a browser with /, it redirects to:
https://mydomain.com/assets/index.html
(Which is the correct path)
But I get the error: “Cannot GET /assets/index.html” in the browser. I can’t seem to manually traverse to any assets honestly beyond app.js which is in the root.
My folder structure (which I’m open to ideas on) is:
-Apps
My package.json is:
{
"name": "Test",
"version": "1.0.1",
"description": "The Website with submit form",
"main": "app/app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node app/app.js"
},
"author": "ME",
"license": "ISC",
"dependencies": {
"@sendgrid/mail": "^8.1.1",
"body-parser": "^1.20.2",
"cors": "^2.8.5",
"dotenv": "^16.4.5",
"express": "^4.18.3"
}
}
app.js is below. The HTML file really doesn’t matter; it’s index.html.
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
const cors = require('cors');
const sgMail = require('@sendgrid/mail');
//const path = require("path");

require('dotenv').config();

console.log(`Hello ${process.env.SENDGRID_API_KEY}`);

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use('/assets', express.static(process.cwd() + '/assets'));
app.use(cors());

app.get("/", (req, res) => {
  res.redirect('/assets/index.html');
})

app.post('/formHandler', (req, res) => {
  console.log('This the post function!!!! ', req.url);
  sgMail.setApiKey(process.env.SENDGRID_API_KEY);
  const msg = {
    to: `${process.env.SENDGRID_TO}`,
    from: `${process.env.SENDGRID_FROM}`,
    replyTo: `${req.body.email}`,
    subject: `${req.body.subject}`,
    text: `${req.body.content}`,
  };
  console.log(msg);
  if (req.body.email === undefined && req.body.subject === undefined && req.body.text === undefined) {
    console.log('found bad packet');
  } else {
    sgMail
      .send(msg)
      .then(() => {
        console.log('Email sent');
        //res.statusCode(200);
        res.json({ success: true })
        res.send();
      })
      .catch((error) => {
        //res.statusCode(400);
        res.json({ success: false, error: 'Bad' });
        res.send();
        console.error(error);
      })
  }
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
Dependency Injection (DI) refers to a pattern where the dependencies of a component are provided as input by an external entity, usually referred to as the injector. NestJS uses DI as a core feature on which its architecture is built. DI allows the creation of dependent objects outside of a class (by a provider) and supplies these objects to the class that needs them (as dependencies).
To follow this guide, you need the following:
Classes are like blueprints/templates for creating objects in JavaScript. Classes create objects using the new operator. When you define a class and use the new operator on it, you are creating an instance of that class. Each instance is an object instantiated based on the structure of the defined class. The objects created have properties, which can be data or method properties added by the class.
class Greeting {
  sayGoodMorning() {
    return "Hello, Good Morning";
  }
}

const morningGreeting = new Greeting().sayGoodMorning();
console.log(morningGreeting);
When you run this piece of code, you will notice the following output:
Hello, Good Morning
Here, the Greeting class was defined using the class keyword with a sayGoodMorning method that returns a string when called. When you create an instance of the Greeting class, you get an object built from the Greeting class structure defined earlier. That instance has access to the encapsulated method sayGoodMorning defined in the class structure, which is why you can do new Greeting().sayGoodMorning().
Another important method is the constructor, which is used for creating and initializing an object created with a class.
Note: There can only be one constructor method per class.
class Greeting {
  constructor(message) {
    this.message = message;
  }

  sayGoodMorning() {
    return this.message;
  }
}

const morningGreeting = new Greeting(
  "Hello, Good Morning to you."
).sayGoodMorning();
console.log(morningGreeting);
In this example, a constructor method is introduced, which creates and initializes the instance of the Greeting class. In the first example, an instance of the Greeting class was created even though no constructor was defined because, by default, JavaScript provides a constructor if none is specified. Also, this instance encapsulates a specific message because the constructor was defined to accept a parameter and set the message property on the instance, unlike the first example, where the instance did not encapsulate any specific data since no property was set within the object. The second example with a constructor lets you return a flexible, dynamic message, unlike the first example, which will always return a static "Hello, Good Morning".
const morningGreeting = new Greeting(
"Hello, Good Morning to you."
).sayGoodMorning();
const morningGreeting2 = new Greeting(
"Hi, Good Morning to you."
).sayGoodMorning();
If you log these two object instances, you will have these in your console:
Hello, Good Morning to you.
Hi, Good Morning to you.
For a better understanding, look at a real-world scenario of a blog system that does not use DI. Say there is a BlogService class that handles CRUD operations for blog posts. This class depends on a database service class (DatabaseService) to interact with the database.
class DatabaseService {
  constructor() {
    this.connectionString = "mongodb://localhost:27017";
  }

  connect() {
    console.log(`Database Connection initiated ${this.connectionString}`);
  }

  createPost(post) {
    console.log("Post created");
    return `Creating ${post.title} to the database`;
  }

  getAllPosts() {
    console.log("All posts returned");
    return [];
  }
}

class BlogService {
  constructor() {
    this.databaseService = new DatabaseService();
  }

  createPost(title, content) {
    return this.databaseService.createPost({ title, content });
  }

  getAllPosts() {
    return this.databaseService.getAllPosts();
  }
}

const blogService = new BlogService();
const createdPost = blogService.createPost("DI NestJS", "Hello World");
const posts = blogService.getAllPosts();

console.log("blogService", blogService);
console.log("createdPost", createdPost);
const blogService = new BlogService();
const createdPost = blogService.createPost("DI NestJS", "Hello World");
const posts = blogService.getAllPosts();
console.log("blogService", blogService);
console.log("createdPost", createdPost);
In this code, there are two main problems to note. First, the BlogService is tightly coupled to the DatabaseService, which means the BlogService cannot work without the DatabaseService. Second, the BlogService directly creates an instance of the DatabaseService, which restricts the BlogService to a single database module.
Now, this is how the previous code can be reformed using DI in NestJS:
import { Injectable } from '@nestjs/common';

@Injectable()
class DatabaseService {
  connectionString: string;

  constructor() {
    this.connectionString = 'db connection string';
  }

  createPost(post) {
    console.log(`Creating post "${post.title}" to the database.`);
  }

  getAllPosts() {
    console.log("Fetching all posts from the database.");
    return [{ title: 'My first post', content: 'Hello world!' }];
  }
}

@Injectable()
class BlogService {
  constructor(private databaseService: DatabaseService) {}

  createPost(title, content) {
    const post = { title, content };
    this.databaseService.createPost(post);
  }

  listPosts() {
    return this.databaseService.getAllPosts();
  }
}
In NestJS, you don’t manually instantiate classes with the new keyword; the NestJS framework’s DI container does it for you under the hood. In this piece of code, you use the @Injectable() decorator to declare the DatabaseService and BlogService, indicating that they are both providers that NestJS can inject.
Nest’s architecture is built around a strong design pattern known as dependency injection. Dependency Injection (DI) is the pattern NestJS uses to achieve IoC. DI allows dependent objects to be created outside of a class and provided to the class that depends on them through injection at runtime, rather than the dependent class creating them itself. The benefit is more modular and maintainable code.
Based on the previous code sample, DatabaseService is a dependency of BlogService. With DI in NestJS, an instance of the DatabaseService object is created outside of the BlogService and provided to the BlogService through constructor injection, rather than the BlogService instantiating the DatabaseService directly.
IoC is a technique for inverting the control flow of a program. Instead of the application controlling the creation and flow of objects, NestJS does. The NestJS IoC container manages the instantiation and injection of dependencies, creating a loosely coupled architecture by managing the dependencies between objects.
In short, IoC inverts the flow of control in a program’s design. Instead of your code calling and managing every dependency, you outsource that control to a container or framework, allowing your application to be more modular and flexible to change because it is loosely coupled.
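A toy version of such a container can make the idea concrete. The sketch below registers factories together with the names of their dependencies and resolves them recursively, caching each instance so every consumer shares the same one (the names and the register/resolve API are invented for illustration; they are not NestJS APIs):

```javascript
// Minimal IoC container sketch: register(name, deps, factory), then resolve(name).
class Container {
  constructor() {
    this.registry = new Map();  // name -> { deps, factory }
    this.instances = new Map(); // name -> cached singleton instance
  }

  register(name, deps, factory) {
    this.registry.set(name, { deps, factory });
  }

  resolve(name) {
    if (this.instances.has(name)) return this.instances.get(name);
    const { deps, factory } = this.registry.get(name);
    // Resolve dependencies first, then build this instance from them.
    const instance = factory(...deps.map((d) => this.resolve(d)));
    this.instances.set(name, instance);
    return instance;
  }
}

// Usage: the container, not "blog", decides how "database" is built.
const container = new Container();
container.register("database", [], () => ({ getAllPosts: () => ["post 1"] }));
container.register("blog", ["database"], (db) => ({ listPosts: () => db.getAllPosts() }));

console.log(container.resolve("blog").listPosts()); // [ 'post 1' ]
```

The caching step mirrors how a DI container keeps singletons: resolving "database" twice returns the same instance both times.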
Ensure you have Node installed on your machine. You will also need to install the Nest CLI globally using the command:
npm i -g @nestjs/cli
Create a new nest project using the Nest CLI:
nest new nest-di
Navigate to your project directory:
cd nest-di
By default, you have an AppModule that has AppService as the provider and AppController as the controller.
Generate an additional resource called players using the command:
nest g resource players
This will set up the players resource by generating boilerplate code for a CRUD resource. It creates the players.module, players.controller, and players.service files by default.
import { Controller, Get } from '@nestjs/common';
import { PlayersService } from './players.service';

@Controller('players')
export class PlayersController {
  constructor(private readonly playersService: PlayersService) {}

  @Get()
  getPlayers(): any {
    return this.playersService.getPlayers();
  }
}
import { Injectable } from '@nestjs/common';

@Injectable()
export class PlayersService {
  private readonly players = [
    { id: 1, name: 'Lionel Messi' },
    { id: 2, name: 'Cristiano Ronaldo' },
  ];

  getPlayers(): any {
    return this.players;
  }
}
In the preceding code, you can see that the PlayersController depends on the PlayersService class to complete the operation of getting the list of players. This means that PlayersService is a dependency of PlayersController. In the players.module file, PlayersService is listed in the providers array. NestJS treats providers as classes that can be instantiated and shared across the app. By listing PlayersService as a provider, NestJS creates an instance of PlayersService that can be injected into other components (in this case, PlayersController).
import { Module } from "@nestjs/common";
import { PlayersService } from "./players.service";
import { PlayersController } from "./players.controller";

@Module({
  controllers: [PlayersController],
  providers: [PlayersService],
})
export class PlayersModule {}
PlayersController is listed in the controllers array inside the players.module.ts file. NestJS also creates an instance of this controller when the PlayersModule is loaded. As previously mentioned, the PlayersController depends on the PlayersService, as specified in the constructor parameter:
constructor(private readonly playersService: PlayersService) {}
When NestJS instantiates the PlayersController, it sees the constructor parameter and understands that the controller depends on PlayersService. NestJS then looks for PlayersService within the PlayersModule and resolves this dependency by creating an instance of PlayersService and injecting it into the PlayersController instance.
Typically, NestJS instantiates PlayersService first since it is a dependency of PlayersController. Once it is instantiated, NestJS keeps the PlayersService instance in the application’s dependency injection container. This container manages the instances of all the classes NestJS creates and is key to Nest’s DI system.
When the PlayersService instance is ready, Nest instantiates the PlayersController and injects the PlayersService instance into its constructor. This injection allows PlayersController to use PlayersService so it can handle the HTTP requests to fetch the players list when /players is called. Request that route and monitor the response:
The AppModule imports the PlayersModule, so when the application starts, NestJS loads and processes PlayersModule, analyzing its imports, controllers, providers, and exports to understand how they should be instantiated and how they relate to one another.
@Module({
  imports: [PlayersModule],
  controllers: [AppController],
  providers: [AppService],
})
export class AppModule {}
When you make a request to the /players endpoint, Nest routes the request to the getPlayers() method of the PlayersController. The controller in turn calls the getPlayers() method on its PlayersService dependency instance to get the list of players and returns the data as the response.
Note: The imports array takes the list of imported modules.
The Nest framework uses TypeScript decorators extensively to improve its modularity and code maintainability. In TypeScript, decorators provide a way to programmatically tap into the process of defining a class. As previously explained, when an instance of a class is created, the classes’ properties and methods become available on the class instance. Decorators then allow you to inject code into the actual definition of a class even before the class instance is created.
Here’s an example:
function exampleDecorator(constructor: Function) {
  console.log("exampleDecorator invoked");
}
@exampleDecorator
class ClassWithExampleDecorator {}
In this code, a function exampleDecorator takes a single parameter, constructor, and logs a message to the console to indicate that it has been invoked. The exampleDecorator function is then used as a class decorator. The ClassWithExampleDecorator class is annotated with exampleDecorator using the @ symbol followed by the decorator’s name. When you run this piece of code, you get this result:
exampleDecorator invoked
Here you can see that even without creating an instance of ClassWithExampleDecorator, the exampleDecorator function has been invoked. Ideally, you would set up and run this simple example by creating a basic Node project with TypeScript support. You can learn more about decorators in TypeScript.
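If you don’t have a TypeScript setup handy, the mechanics can be imitated in plain JavaScript by applying the decorator function to the class manually, which makes the “runs at definition time, before any instance exists” point visible:

```javascript
// Imitating a class decorator in plain JavaScript: a decorator is just a
// function that receives the class (its constructor) when the class is defined.
function exampleDecorator(constructor) {
  console.log("exampleDecorator invoked for", constructor.name);
  return constructor; // a decorator may also return a replacement class
}

// Without @-syntax, the application is simply an explicit function call:
const ClassWithExampleDecorator = exampleDecorator(class Example {});

// The log above already fired, even though no instance has been created yet.
```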
In NestJS, decorators are used to annotate and modify classes at design time. They also define metadata that NestJS uses to organize the application structure and dependencies. Take a look at the core decorators from NestJS:
The @Injectable()
decorator is used in NestJS to mark a class as a provider that can be managed by the NestJS DI system. It tells NestJS that this particular class is a dependency and is available to be injected into the classes that use it.
When you annotate a class in NestJS with the @Injectable()
decorator, you are telling NestJS that the particular class should be available to be instantiated and injected as a dependency where it is needed. The NestJS IoC container manages all the classes in a NestJS app that are marked with @Injectable()
. When an instance of such a class is needed, NestJS consults the IoC container, resolves any dependencies the class might have, instantiates the class if it has not been instantiated yet, and then provides the instance where it is required.
Take a look at this code:
@Injectable()
export class AppService {
getHello(): string {
return "Hello";
}
}
Here, the AppService
file is marked with the @Injectable()
decorator, which makes it available for injection by the NestJS DI system. NestJS uses the reflect-metadata
library to define the metadata for the AppService
class (decorated by the @Injectable()
) so that it can be managed by the NestJS DI system.
This metadata includes the information about the class, its constructor params (dependencies), and methods that the DI system uses to resolve and inject required dependencies at runtime.
In the example above, the metadata will have information about the AppService
class, and its method. Assuming AppService
has a dependency passed to its constructor method, the metadata will also include this information.
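To make this resolution flow concrete, here is a minimal, purely illustrative sketch of how a DI container can use such metadata: each class is registered together with its constructor dependencies, and resolve() instantiates dependencies first and caches singletons. The Container class and its API are hypothetical and not NestJS’s actual implementation:

```typescript
// Minimal, illustrative DI container (not NestJS's real implementation).
type Ctor<T = any> = new (...args: any[]) => T;

class Container {
  // Metadata: each registered class mapped to its constructor dependencies.
  private registry = new Map<Ctor, Ctor[]>();
  // Singleton cache: instantiate each provider at most once.
  private instances = new Map<Ctor, any>();

  register(token: Ctor, deps: Ctor[] = []): void {
    this.registry.set(token, deps);
  }

  resolve<T>(token: Ctor<T>): T {
    if (this.instances.has(token)) {
      return this.instances.get(token);
    }
    const deps = this.registry.get(token) ?? [];
    // Resolve each dependency first, then construct the class with them.
    const args = deps.map((dep) => this.resolve(dep));
    const instance = new token(...args);
    this.instances.set(token, instance);
    return instance;
  }
}

class AppService {
  getHello(): string {
    return "Hello";
  }
}

class AppController {
  constructor(private readonly appService: AppService) {}
  getHello(): string {
    return this.appService.getHello();
  }
}

const container = new Container();
container.register(AppService);
container.register(AppController, [AppService]);

// The controller is constructed with its AppService dependency injected.
console.log(container.resolve(AppController).getHello()); // "Hello"
```

The real framework obtains the dependency list from the reflect-metadata records described above rather than from an explicit array, but the resolve-then-cache flow is the same idea.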
The @Module()
decorator provides metadata that NestJS uses to organize the application structure. The @Module()
decorator takes an object that can have properties like imports, controllers, providers, and exports.
NestJS uses the IoC container for dependency injection and managing dependencies. The providers are registered within the container and injected into their dependents as needed. For example, AppService
will be registered and injected into the AppController
that needs it.
Here’s an example to understand the @Module()
decorator properly:
@Module({
imports: [PlayersModule],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
As shown here, this is the root module that is specified in the main.ts
file when you bootstrap the NestJS app. The root module has an imported module called PlayersModule
, a controller AppController
, and a provider AppService
.
When you start your app, Nest looks at the imports property of the AppModule to determine which other modules need to be loaded. In this case, PlayersModule is imported, so NestJS will load and configure it. If PlayersModule itself lists modules in its own imports array, NestJS recursively loads those as well, so that all modules and their dependencies are loaded and configured. Once all the modules have been loaded, NestJS instantiates the providers specified in the providers property of each module. This means that the AppService and the PlayersService will be instantiated and added to the IoC container. Next, NestJS handles dependency injection by injecting providers into the controllers and services using constructor injection.
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
}
@Injectable()
export class AppService {
getHello(): string {
return "Hello";
}
}
As mentioned earlier, NestJS injects the AppService
which is a provider into the AppController
using the constructor injection. The same will happen between PlayersService
and PlayersController
.
Once dependencies have been injected, Nest initializes the controllers specified in the controllers property of each module to handle incoming requests and return responses.
The @Controller()
decorator in NestJS is used to define and organize the routes and request-handling logic in your app. Controllers help separate HTTP request handling from the business logic of the application, which makes the codebase more modular and maintainable.
When you decorate a class with @Controller()
, you are providing metadata to NestJS which indicates that the class serves as a controller. Nest, in turn, will inspect the methods within the controllers and look for HTTP method decorators like @Get()
, @Post()
etc. NestJS creates an internal routing table based on the decorators applied to the controller methods. This routing table maps incoming requests to the appropriate controller methods based on the requested route and HTTP method. For example, based on your current codebase, if you make a GET request to localhost:3000
, the routing table maps your incoming GET request to the appropriate controller, which in this case is AppController
. It looks up the controller and looks for the @Get()
decorator, processes the request, interacts with its dependency appService
, and returns a response.
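As a rough illustration of what such a routing table contains, you can think of it as a map from method-plus-path keys to handler functions. This is a conceptual sketch only; Nest’s actual internal structures are more involved:

```typescript
// Illustrative routing table: a map from "METHOD path" keys to handlers.
type Handler = () => string;

const routingTable = new Map<string, Handler>();

// Registration: conceptually what @Controller() plus @Get() produce.
const appService = { getHello: (): string => "Hello" };
routingTable.set("GET /", () => appService.getHello());

// Dispatch: look up the handler for an incoming request.
function dispatch(method: string, path: string): string {
  const handler = routingTable.get(`${method} ${path}`);
  return handler ? handler() : "404 Not Found";
}

console.log(dispatch("GET", "/"));        // "Hello"
console.log(dispatch("GET", "/missing")); // "404 Not Found"
```

A request that matches no entry falls through to a not-found response, which mirrors how Nest returns a 404 for unmapped routes.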
Note: The imports
field used in the module’s metadata is for importing internal and external modules. When you import a module, you are importing the module’s context which includes its providers, controllers, and exported entities. This allows you to compose your application to be modular and maintainable.
Here is how you can log a module’s metadata:
async function bootstrap() {
const app = await NestFactory.create(AppModule);
//log metadata
const metadata = Reflect.getMetadataKeys(AppModule);
console.log(metadata);
await app.listen(3000);
}
bootstrap();
In this code, the Reflect
object, which provides reflection capabilities for inspecting metadata, is used to get the metadata keys associated with the AppModule
using the getMetadataKeys
method.
The resulting log returns an array of the module decorator key values:
[ 'imports', 'controllers', 'providers' ]
Other methods can be called on the Reflect
object, such as getMetadata
, getOwnMetadata
, getOwnMetadataKeys
.
To inspect the contents of the imports
property of the module:
async function bootstrap() {
const app = await NestFactory.create(AppModule);
//log metadata
const metadata = Reflect.getMetadataKeys(AppModule);
console.log(metadata);
for (const key of metadata) {
if (key === "imports") {
const imports = Reflect.getMetadata(key, AppModule);
console.log("Imports", imports);
}
}
await app.listen(3000);
}
Here, you get the metadata keys associated with the AppModule
, then iterate over them and check whether any key equals imports. If a key named imports
is found, you then use the getMetadata
function to get the array of the imported modules. You can do the same for controllers, providers, and exported entities by changing the value of the key.
Take a look at this code:
import { Module } from "@nestjs/common";
import { PlayersService } from "./players.service";
import { PlayersController } from "./players.controller";
@Module({
controllers: [PlayersController],
providers: [],
})
export class PlayersModule {}
When you run this code, you will encounter a dependency resolution error. This error is common for developers new to NestJS; it states that Nest can’t resolve the dependencies of the PlayersController. The dependency in this instance is the PlayersService injected through constructor injection. To resolve this error, first check that the PlayersModule is a valid NestJS module. Next, since PlayersService is a provider, check that it is listed among the providers in the PlayersModule. The third option is to check whether a required third-party module is listed among the imports of the AppModule.
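Assuming the error stems from the missing provider registration in the module shown above, the fix is to list PlayersService in the module’s providers array (shown here as a sketch of the corrected module):

```typescript
import { Module } from "@nestjs/common";
import { PlayersService } from "./players.service";
import { PlayersController } from "./players.controller";

@Module({
  controllers: [PlayersController],
  providers: [PlayersService], // register the provider so Nest can inject it
})
export class PlayersModule {}
```

With PlayersService registered, Nest can resolve the constructor dependency of PlayersController and the app starts normally.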
In this tutorial, you have learned the basics of Dependency Injection, Inversion of Control, and how they apply in the context of NestJS. You also learned what decorators are and what they do when used to decorate a class.
You will find the complete source code of this tutorial here on GitHub.
My droplet was rebooted by DigitalOcean and now I am getting a 502 Bad Gateway error (nginx/1.18.0 (Ubuntu)). Could someone please assist with what I need to do? I tried running “pm2 restart app.js” in the console but am getting [PM2][ERROR] Process or Namespace app.js not found.
This is really frustrating as my service is down because of a DigitalOcean issue; I have raised a support ticket.
For a few days now I have been trying to set up a droplet with a MySQL database on a managed DB cluster (also on DigitalOcean), but have been running into trouble getting the app connected to the database: I keep getting a connection timed out error, which the docs refer to as a firewall (db-side) issue.
/home/nodejs/.pm2/logs/app-error.log last 15 lines:
0|app | at process.processTimers (node:internal/timers:507:7) {
0|app | errorno: 'ETIMEDOUT',
0|app | code: 'ETIMEDOUT',
0|app | syscall: 'connect',
0|app | fatal: true
0|app | }
0|app | Error: connect ETIMEDOUT
0|app | at PoolConnection._handleTimeoutError (/var/www/html/project-root/node_modules/mysql2/lib/connection.js:205:17)
0|app | at listOnTimeout (node:internal/timers:564:17)
0|app | at process.processTimers (node:internal/timers:507:7) {
0|app | errorno: 'ETIMEDOUT',
0|app | code: 'ETIMEDOUT',
0|app | syscall: 'connect',
0|app | fatal: true
0|app | }
The set-up is as follows:

- hostname -i inside the droplet’s SSH console

This is the setup for the dbPool connect:
const mysql = require('mysql2');
const fs = require('fs');
const path = require('path');
const caCertPath = path.join(__dirname, '../../config/ca-certificate.crt');
const caCert = fs.readFileSync(caCertPath);
const dbPool = mysql.createPool({
connectionLimit: 100,
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_DATABASE,
multipleStatements: true,
ssl: {
ca: caCert
}
});
module.exports = dbPool;
The CA certificate is in /project-root/config/ca-certificate.crt while dbConnect.js is inside /project-root/backend/utils/dbConnect.js
And inside the app.js:
// Import module
const dbPool = require('./backend/utils/dbConnect');

// Connect db
dbPool.getConnection((err, connection) => {
  console.log('err connent', err);
  if (err) throw (err); // If not connected
  console.log('Connected to mySQL as ID ' + connection.threadId);
});
And the .env file:
DB_HOST='db-mysql-****-*****-do-user-********-0.c.db.ondigitalocean.com'
DB_USER='username'
DB_PASSWORD='password'
DB_DATABASE='dbname'
DB_PORT='25060'
I also tried various ports for the DB Port, like 25061 and 3306. When I use localhost or 127.0.0.1 or variations of that, I get a handshake error issue:
0|app | Listening on port 3000
0|app | err connent Error: self-signed certificate in certificate chain
0|app | at TLSSocket.onConnectSecure (node:_tls_wrap:1538:34)
0|app | at TLSSocket.emit (node:events:513:28)
0|app | at TLSSocket._finishInit (node:_tls_wrap:952:8)
0|app | at ssl.onhandshakedone (node:_tls_wrap:733:12) {
0|app | code: 'HANDSHAKE_SSL_ERROR',
0|app | fatal: true
0|app | }
I also tried setting up a managed app with Node.js and a deployment process from the GitHub repository, with a DB added to the app during the app creation process in the DigitalOcean backend; I tried the existing managed DB cluster database as well and ran into exactly the same issues.
Could someone point me into the right directions to solve this issue?
Thank you very much, Julian
It is a simple npm-based setup. We have a client library for our backend, which is privately stored in our GitHub organization. I modified our .npmrc file in our frontend to look as:
@<our organization>:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
I then created an environment variable via Terraform for the static site:
env {
key = "GITHUB_TOKEN"
value = "var.github_token"
scope = "BUILD_TIME"
type = "SECRET"
}
Now, when I trigger a deployment, I get the error:
npm ERR! code E401
npm ERR! 401 Unauthorized - GET <redacted> - authentication token not provided
Everything worked fine when I tested this locally with npm ci
after setting the environment variable.
Earlier in the build logs, it even states that it uses the GITHUB_TOKEN environment variable, so I can only imagine that the Node.js buildpack does not use it.
Any hint would be appreciated!
The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter-weight alternative to virtual machines. Though containers are not new, they offer benefits, including process isolation and environment standardization, that are growing in importance as more developers use distributed application architectures.
When building and scaling an application with Docker, the starting point is typically creating an image for your application, which you can then run in a container. The image includes your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.
In this tutorial, you will create an application image for a static website that uses the Express framework and Bootstrap. You will then build a container using that image and push it to Docker Hub for future use. Finally, you will pull the stored image from your Docker Hub repository and build another container, demonstrating how you can recreate and scale your application.
If you are using Ubuntu version 16.04 or below, we recommend upgrading to a more recent version, since Ubuntu no longer provides support for these releases. This collection of guides will help you upgrade your Ubuntu version.
To follow this tutorial, you will need:
A server running Ubuntu, along with a non-root user with sudo
privileges and an active firewall. For guidance on how to set these up, please choose your distribution from this list and follow our Initial Server Setup Guide.
Docker installed on your server, following Steps 1 and 2 of “How To Install and Use Docker on Ubuntu” 22.04 / 20.04 / 18.04.
Node.js and npm installed, following these instructions on installing with the PPA managed by NodeSource on Ubuntu 22.04 / 20.04 / 18.04.
A Docker Hub account. For an overview of how to set this up, refer to this introduction on getting started with Docker Hub.
To create your image, you will first need to make your application files, which you can then copy to your container. These files will include your application’s static content, code, and dependencies.
First, create a directory for your project in your non-root user’s home directory. We will call ours node_project
, but you should feel free to replace this with something else:
- mkdir node_project
Navigate to this directory:
- cd node_project
This will be the root directory of the project.
Next, create a package.json
file with your project’s dependencies and other identifying information. Open the file with nano
or your favorite editor:
- nano package.json
Add the following information about the project, including its name, author, license, entrypoint, and dependencies. Be sure to replace the author information with your own name and contact details:
{
"name": "nodejs-image-demo",
"version": "1.0.0",
"description": "nodejs image demo",
"author": "Sammy the Shark <sammy@example.com>",
"license": "MIT",
"main": "app.js",
"keywords": [
"nodejs",
"bootstrap",
"express"
],
"dependencies": {
"express": "^4.16.4"
}
}
This file includes the project name, author, and license under which it is being shared. npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We’ve listed the MIT license in the license field, permitting the free use and distribution of the application code.
Additionally, the file specifies:
"main"
: The entrypoint for the application, app.js
. You will create this file next."dependencies"
: The project dependencies — in this case, Express 4.16.4 or above.Though this file does not list a repository, you can add one by following these guidelines on adding a repository to your package.json
file. This is a good addition if you are versioning your application.
Save and close the file when you’ve finished making changes.
To install your project’s dependencies, run the following command:
- npm install
This will install the packages you’ve listed in your package.json
file in your project directory.
We can now move on to building the application files.
We will create a website that offers users information about sharks. Our application will have a main entrypoint, app.js
, and a views
directory that will include the project’s static assets. The landing page, index.html
, will offer users some preliminary information and a link to a page with more detailed shark information, sharks.html
. In the views
directory, we will create both the landing page and sharks.html
.
First, open app.js
in the main project directory to define the project’s routes:
- nano app.js
The first part of the file will create the Express application and Router objects, and define the base directory and port as constants:
const express = require('express');
const app = express();
const router = express.Router();
const path = __dirname + '/views/';
const port = 8080;
The require
function loads the express
module, which we then use to create the app
and router
objects. The router
object will perform the routing function of the application, and as we define HTTP method routes we will add them to this object to define how our application will handle requests.
This section of the file also sets a couple of constants, path
and port
:
- path: Defines the base directory, which will be the views subdirectory within the current project directory.
- port: Tells the app to listen on and bind to port 8080.

Next, set the routes for the application using the router
object:
...
router.use(function (req,res,next) {
console.log('/' + req.method);
next();
});
router.get('/', function(req,res){
res.sendFile(path + 'index.html');
});
router.get('/sharks', function(req,res){
res.sendFile(path + 'sharks.html');
});
The router.use
function loads a middleware function that will log the router’s requests and pass them on to the application’s routes. These are defined in the subsequent functions, which specify that a GET request to the base project URL should return the index.html
page, while a GET request to the /sharks
route should return sharks.html
.
Finally, mount the router
middleware and the application’s static assets and tell the app to listen on port 8080
:
...
app.use(express.static(path));
app.use('/', router);
app.listen(port, function () {
console.log('Example app listening on port 8080!')
})
The finished app.js
file will look like this:
const express = require('express');
const app = express();
const router = express.Router();
const path = __dirname + '/views/';
const port = 8080;
router.use(function (req,res,next) {
console.log('/' + req.method);
next();
});
router.get('/', function(req,res){
res.sendFile(path + 'index.html');
});
router.get('/sharks', function(req,res){
res.sendFile(path + 'sharks.html');
});
app.use(express.static(path));
app.use('/', router);
app.listen(port, function () {
console.log('Example app listening on port 8080!')
})
Save and close the file when you are finished.
Next, let’s add some static content to the application. Start by creating the views
directory:
- mkdir views
Open the landing page file, index.html
:
- nano views/index.html
Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html
info page:
<!DOCTYPE html>
<html lang="en">
<head>
<title>About Sharks</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<link href="css/styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>
<body>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
<div class="container">
<button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
</button> <a class="navbar-brand" href="#">Everything Sharks</a>
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="active nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
</div>
</nav>
<div class="jumbotron">
<div class="container">
<h1>Want to Learn About Sharks?</h1>
<p>Are you ready to learn about sharks?</p>
<br>
<p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
</p>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-6">
<h3>Not all sharks are alike</h3>
<p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
</p>
</div>
<div class="col-lg-6">
<h3>Sharks are ancient</h3>
<p>There is evidence to suggest that sharks lived up to 400 million years ago.
</p>
</div>
</div>
</div>
</body>
</html>
The top-level navbar here allows users to toggle between the Home and Sharks pages. In the navbar-nav
subcomponent, we are using Bootstrap’s active
class to indicate the current page to the user. We’ve also specified the routes to our static pages, which match the routes we defined in app.js
:
...
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="active nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
...
Additionally, we’ve created a link to our shark information page in our jumbotron’s button:
...
<div class="jumbotron">
<div class="container">
<h1>Want to Learn About Sharks?</h1>
<p>Are you ready to learn about sharks?</p>
<br>
<p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
</p>
</div>
</div>
...
There is also a link to a custom style sheet in the header:
...
<link href="css/styles.css" rel="stylesheet">
...
We will create this style sheet at the end of this step.
Save and close the file when you are finished.
With the application landing page in place, we can create our shark information page, sharks.html
, which will offer interested users more information about sharks.
Open the file:
- nano views/sharks.html
Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:
<!DOCTYPE html>
<html lang="en">
<head>
<title>About Sharks</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
<link href="css/styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>
<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
<div class="container">
<button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
</button> <a class="navbar-brand" href="/">Everything Sharks</a>
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav mr-auto">
<li class="nav-item"><a href="/" class="nav-link">Home</a>
</li>
<li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
</li>
</ul>
</div>
</div>
</nav>
<div class="jumbotron text-center">
<h1>Shark Info</h1>
</div>
<div class="container">
<div class="row">
<div class="col-lg-6">
<p>
<div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
</div>
<img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
</p>
</div>
<div class="col-lg-6">
<p>
<div class="caption">Other sharks are known to be friendly and welcoming!</div>
<img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
</p>
</div>
</div>
</div>
</html>
Note that in this file, we again use the active
class to indicate the current page.
Save and close the file when you are finished.
Finally, create the custom CSS style sheet that you’ve linked to in index.html
and sharks.html
by first creating a css
folder in the views
directory:
- mkdir views/css
Open the style sheet:
- nano views/css/styles.css
Add the following code, which will set the desired color and font for our pages:
.navbar {
margin-bottom: 0;
}
body {
background: #020A1B;
color: #ffffff;
font-family: 'Merriweather', sans-serif;
}
h1,
h2 {
font-weight: bold;
}
p {
font-size: 16px;
color: #ffffff;
}
.jumbotron {
background: #0048CD;
color: white;
text-align: center;
}
.jumbotron p {
color: white;
font-size: 26px;
}
.btn-primary {
color: #fff;
border-color: white;
margin-bottom: 5px;
}
img,
video,
audio {
margin-top: 20px;
max-width: 80%;
}
div.caption {
float: left;
clear: both;
}
In addition to setting font and color, this file also limits the size of the images by specifying a max-width
of 80%. This will prevent them from taking up more room than we would like on the page.
Save and close the file when you are finished.
With the application files in place and the project dependencies installed, you are ready to start the application.
If you followed the initial server setup tutorial in the prerequisites, you will have an active firewall permitting only SSH traffic. To permit traffic to port 8080
run:
- sudo ufw allow 8080
To start the application, make sure that you are in your project’s root directory:
- cd ~/node_project
Start the application with node app.js
:
- node app.js
Navigate your browser to http://your_server_ip:8080
. You will load the following landing page:
Click on the Get Shark Info button. The following information page will load:
You now have an application up and running. When you are ready, quit the server by typing CTRL+C
. We can now move on to creating the Dockerfile that will allow us to recreate and scale this application as desired.
Your Dockerfile specifies what will be included in your application container when it is executed. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.
Following these guidelines on building optimized containers, we will make our image as efficient as possible by minimizing the number of image layers and restricting the image’s function to a single purpose — recreating our application files and static content.
In your project’s root directory, create the Dockerfile:
- nano Dockerfile
Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application that will form the starting point of the application build.
Let’s use the node:10-alpine
image, since at the time of writing this is the recommended LTS version of Node.js. The alpine
image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine
image is the right choice for your project, please review the full discussion under the Image Variants section of the Docker Hub Node image page.
Add the following FROM
instruction to set the application’s base image:
FROM node:10-alpine
This image includes Node.js and npm. Each Dockerfile must begin with a FROM
instruction.
By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root. It is a recommended security practice to avoid running containers as root and to restrict capabilities within the container to only those required to run its processes. We will therefore use the node user’s home directory as the working directory for our application and set them as our user inside the container. For more information about best practices when working with the Docker Node image, check out this best practices guide.
To fine-tune the permissions on our application code in the container, let’s create the node_modules
subdirectory in /home/node
along with the app
directory. Creating these directories will ensure that they have the permissions we want, which will be important when we create local node modules in the container with npm install
. In addition to creating these directories, we will set ownership on them to our node user:
...
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
For more information on the utility of consolidating RUN
instructions, read through this discussion of how to manage container layers.
Next, set the working directory of the application to /home/node/app
:
...
WORKDIR /home/node/app
If a WORKDIR
isn’t set, Docker will create one by default, so it’s a good idea to set it explicitly.
Next, copy the package.json
and package-lock.json
(for npm 5+) files:
...
COPY package*.json ./
Adding this COPY
instruction before running npm install
or copying the application code allows us to take advantage of Docker’s caching mechanism. At each stage in the build, Docker will check whether it has a layer cached for that particular instruction. If we change package.json
, this layer will be rebuilt, but if we don’t, this instruction will allow Docker to use the existing image layer and skip reinstalling our node modules.
To ensure that all of the application files are owned by the non-root node user, including the contents of the node_modules
directory, switch the user to node before running npm install
:
...
USER node
After copying the project dependencies and switching our user, we can run npm install
:
...
RUN npm install
Next, copy your application code with the appropriate permissions to the application directory on the container:
...
COPY . .
This will ensure that the application files are owned by the non-root node user.
Finally, expose port 8080
on the container and start the application:
...
EXPOSE 8080
CMD [ "node", "app.js" ]
EXPOSE
does not publish the port, but instead functions as a way of documenting which ports on the container will be published at runtime. CMD
runs the command to start the application — in this case, node app.js
. Note that there should only be one CMD
instruction in each Dockerfile. If you include more than one, only the last will take effect.
There are many things you can do with the Dockerfile. For a complete list of instructions, please refer to Docker’s Dockerfile reference documentation.
The complete Dockerfile looks like this:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Save and close the file when you are finished editing.
Before building the application image, let’s add a .dockerignore
file. Working in a similar way to a .gitignore
file, .dockerignore
specifies which files and directories in your project directory should not be copied over to your container.
Open the .dockerignore
file:
- nano .dockerignore
Inside the file, add your local node modules, npm logs, Dockerfile, and .dockerignore
file:
node_modules
npm-debug.log
Dockerfile
.dockerignore
If you are working with Git then you will also want to add your .git
directory and .gitignore
file.
Save and close the file when you are finished.
You are now ready to build the application image using the docker build
command. Using the -t
flag with docker build
will allow you to tag the image with a memorable name. Because we are going to push the image to Docker Hub, let’s include our Docker Hub username in the tag. We will tag the image as nodejs-image-demo
, but feel free to replace this with a name of your own choosing. Remember to also replace your_dockerhub_username
with your own Docker Hub username:
- sudo docker build -t your_dockerhub_username/nodejs-image-demo .
The .
specifies that the build context is the current directory.
It will take a minute or two to build the image. Once it is complete, check your images:
- sudo docker images
You will receive the following output:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
your_dockerhub_username/nodejs-image-demo latest 1c723fb2ef12 8 seconds ago 73MB
node 10-alpine f09e7c96b6de 3 weeks ago 70.7MB
It is now possible to create a container with this image using docker run
. We will include three flags with this command:
-p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, review this discussion in the Docker docs on port binding.
-d: This runs the container in the background.
--name: This allows us to give the container a memorable name.
Run the following command to create and start the container:
- sudo docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo
Once your container is up and running, you can inspect a list of your running containers with docker ps
:
- sudo docker ps
You will receive the following output:
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e50ad27074a7 your_dockerhub_username/nodejs-image-demo "node app.js" 8 seconds ago Up 7 seconds 0.0.0.0:80->8080/tcp nodejs-image-demo
With your container running, you can now visit your application by navigating your browser to your server IP without the port:
http://your_server_ip
Your application landing page will load once again.
Now that you have created an image for your application, you can push it to Docker Hub for future use.
By pushing your application image to a registry like Docker Hub, you make it available for subsequent use as you build and scale your containers. We will demonstrate how this works by pushing the application image to a repository and then using the image to recreate our container.
The first step to pushing the image is to log in to the Docker Hub account you created in the prerequisites:
- sudo docker login -u your_dockerhub_username
When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json
file in your user’s home directory with your Docker Hub credentials.
You can now push the application image to Docker Hub using the tag you created earlier, your_dockerhub_username/nodejs-image-demo
:
- sudo docker push your_dockerhub_username/nodejs-image-demo
Let’s test the utility of the image registry by destroying our current application container and image and rebuilding them with the image in our repository.
First, list your running containers:
- sudo docker ps
You will get the following output:
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e50ad27074a7 your_dockerhub_username/nodejs-image-demo "node app.js" 3 minutes ago Up 3 minutes 0.0.0.0:80->8080/tcp nodejs-image-demo
Using the CONTAINER ID
listed in your output, stop the running application container. Be sure to replace the highlighted ID below with your own CONTAINER ID
:
- sudo docker stop e50ad27074a7
List all of your images with the -a
flag:
- docker images -a
You will receive the following output with the name of your image, your_dockerhub_username/nodejs-image-demo
, along with the node
image and the other images from your build:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
your_dockerhub_username/nodejs-image-demo latest 1c723fb2ef12 7 minutes ago 73MB
<none> <none> 2e3267d9ac02 4 minutes ago 72.9MB
<none> <none> 8352b41730b9 4 minutes ago 73MB
<none> <none> 5d58b92823cb 4 minutes ago 73MB
<none> <none> 3f1e35d7062a 4 minutes ago 73MB
<none> <none> 02176311e4d0 4 minutes ago 73MB
<none> <none> 8e84b33edcda 4 minutes ago 70.7MB
<none> <none> 6a5ed70f86f2 4 minutes ago 70.7MB
<none> <none> 776b2637d3c1 4 minutes ago 70.7MB
node 10-alpine f09e7c96b6de 3 weeks ago 70.7MB
Remove the stopped container and all of the images, including unused or dangling images, with the following command:
- docker system prune -a
Type y
when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.
You have now removed both the container running your application image and the image itself. For more information on removing Docker containers, images, and volumes, please review How To Remove Docker Images, Containers, and Volumes.
With all of your images and containers deleted, you can now pull the application image from Docker Hub:
- docker pull your_dockerhub_username/nodejs-image-demo
List your images once again:
- docker images
Your output will have your application image:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
your_dockerhub_username/nodejs-image-demo latest 1c723fb2ef12 11 minutes ago 73MB
You can now rebuild your container using the command from Step 3:
- docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo
List your running containers:
- docker ps
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6bc2f50dff6 your_dockerhub_username/nodejs-image-demo "node app.js" 4 seconds ago Up 3 seconds 0.0.0.0:80->8080/tcp nodejs-image-demo
Visit http://your_server_ip
once again to view your running application.
In this tutorial you created a static web application with Express and Bootstrap, as well as a Docker image for this application. You used this image to create a container and pushed the image to Docker Hub. From there, you were able to destroy your image and container and recreate them using your Docker Hub repository.
If you are interested in learning more about how to work with tools like Docker Compose and Docker Machine to create multi-container setups, you can look at the following guides:
For general tips on working with container data, check out:
If you are interested in other Docker-related topics, please find our complete library of Docker tutorials.
NestJS is a powerful framework for building efficient, scalable Node.js server-side applications. Guards are an important feature of NestJS: they let you enforce authentication and authorization rules in your application, restricting which users can access particular routes. With guards in place, you can be sure that only authorized users are able to reach routes that are not publicly accessible. This helps prevent unauthorized access and protects against potential attacks such as data breaches.
In this tutorial, you will take a deep dive into understanding what Guards are, how they work, and how you can use them effectively in your application. First, you will be implementing an AuthGuard
and understanding how it works in protecting endpoints from unauthorized access.
Secondly, you will explore how to bind guards at different scoped levels (global, controller, and method level), which will allow you to implement granular authentication checks. Next, you will learn how to use multiple guards and skip authentication checks for specific endpoints to make them publicly accessible without the need for authorization.
At the end of this guide, you will have a thorough understanding of how to implement Guards in NestJS applications effectively and build secured and scalable applications that are protected from unauthorized access.
To follow this tutorial, you will need:
The project folder that will be created will be called guard
. The NestJS application will be bootstrapped from the Nest CLI using the command:
nest new guard
This will bootstrap the NestJS application and provide the necessary files that are needed for the implementation of guards in NestJS. You will see the following output in your CLI once your application has been created:
🚀 Successfully created project guard
👉 Get started with the following commands:
$ cd guard
$ yarn run start
Once the installation is complete, change into the new directory (guard) that you have just created using the command:
cd guard
Then start the application with:
yarn start:dev
This will start the development server at port 3000, and you will see an output like below in your terminal:
Output
File change detected. Starting incremental compilation...
Found 0 errors. Watching for file changes.
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [NestFactory] Starting Nest application...
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [InstanceLoader] AppModule dependencies initialized +34ms
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [RoutesResolver] AppController {/}: +5ms
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [RouterExplorer] Mapped {/, GET} route +4ms
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [RouterExplorer] Mapped {/test, GET} route +0ms
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [RouterExplorer] Mapped {/public, GET} route +1ms
[Nest] 48025 - 05/14/2023, 9:13:33 AM LOG [NestApplication] Nest application successfully started +4ms
Once the application is up and running at port 3000
, open Postman and make a request to the URL http://localhost:3000
and you’ll see the page load successfully, as shown below:
After the bootstrapping of the project, open the project folder in your favorite code editor and create a new folder under the src
folder called guards
. Then, create a file inside the guards folder called auth.guard.ts.
The implementation of the AuthGuard
will be done inside this auth.guard.ts
file.
Next, you will create an AuthGuard
that will protect the routes in the application.
To start the auth guard implementation, the CanActivate interface will be implemented. Every guard created in NestJS must implement the canActivate() function. The canActivate() function returns a boolean, i.e. it specifies whether the current request being sent by the user to an endpoint should go through or not. NestJS uses the returned boolean value (true or false) to control the next action: if it is true, the request is processed; if it is false, the request is denied and a 403 Forbidden error is returned.
The below code block shows the implementation of the AuthGuard
in the auth.guard.ts
file created earlier:
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';
import { Observable } from 'rxjs';
@Injectable()
export class AuthGuard implements CanActivate {
canActivate(
context: ExecutionContext,
): boolean | Promise<boolean> | Observable<boolean> {
const request = context.switchToHttp().getRequest();
const apiKey = request.headers;
if (apiKey && apiKey.api_key === 'MY_API_KEY') {
return true;
}
return false;
}
}
Here is a simple AuthGuard that allows a request to a route handler or endpoint to be processed only if the user supplies a valid api_key. The ExecutionContext interface has access to the request object through the switchToHttp().getRequest() method, and the headers needed to perform the guard's check can be read from this request object. The request object is retrieved from the context parameter, and a variable named apiKey stores the request.headers object sent with the request. If an api_key header is present and equals MY_API_KEY (the secret value required to access the route), the request proceeds. If the user supplies any other value, the guard returns false, the request is not processed, and an unauthorized error is thrown, as explained earlier.
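The decision logic of this guard can be exercised outside the framework. Below is a minimal, framework-free sketch of the same header check; the isAuthorized helper is hypothetical, written only for illustration (in the real guard, this logic runs inside canActivate()):

```typescript
// Minimal, framework-free sketch of the AuthGuard's header check.
// `isAuthorized` is an illustrative helper, not part of NestJS.
function isAuthorized(headers: Record<string, string>): boolean {
  // Same comparison the guard performs against the secret key.
  return headers['api_key'] === 'MY_API_KEY';
}

console.log(isAuthorized({ api_key: 'MY_API' }));     // false -> 403 Forbidden
console.log(isAuthorized({ api_key: 'MY_API_KEY' })); // true  -> request proceeds
```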
To make the guard protect the routes in the app.controller.ts
file, import the AuthGuard
function and bind it to the scope of the controller using the @UseGuards
decorator as shown below:
@Controller()
@UseGuards(AuthGuard)
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
@Get('test')
test(): string {
return 'This is a Test Route';
}
}
Let’s test this out by making requests in Postman:
In the first image, a request is made to the endpoint localhost:3000, which is expected to return the string Hello World. The api_key passed in the headers has a value of MY_API. When the request is sent, a 403 Forbidden error is thrown. This is because the AuthGuard returns false: the value passed (MY_API) does not match the apiKey value specified in the guard. In the second image, the correct api_key value (MY_API_KEY) was passed in the request headers, and the request went through with a response of Hello Guards. This means that only users with the secret key MY_API_KEY can access the routes in this controller file of the application.
Next, you will explore the ways of binding guards.
What does the binding of guards mean? When we created the AuthGuard in the previous step, it was standalone and could not protect any routes until we bound it: either at the global level using the useGlobalGuards method, or at the controller or method level using the @UseGuards decorator.
Just like interceptors
, pipes
and exception filters
, guards can be global-scoped, controller-scoped or method-scoped.
The useGlobalGuards() method is used to bind guards globally, and the AuthGuard can be bound globally in the main.ts file as shown below:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { AuthGuard } from './guards/auth.guard';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
app.useGlobalGuards(new AuthGuard());
await app.listen(3000);
}
bootstrap();
Here, the AuthGuard is bound at the global level using the useGlobalGuards() method of the Nest application instance. If we go to any route in the application and pass in the wrong api_key value, a forbidden resource error for unauthorized access will be thrown. If the right api_key is passed in the request headers, a 200 success response is returned and the endpoint is accessed successfully.
For controller-scoped
guard binding, the @UseGuards
decorator will be applied under the Controller
decorator to guard all the routes in that particular controller only as shown below:
import { Controller, Get, UseGuards } from '@nestjs/common';
import { AppService } from './app.service';
import { AuthGuard } from './guards/auth.guard';
@Controller()
@UseGuards(AuthGuard)
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
@Get('test')
test(): string {
return 'This is a Test Route';
}
}
The above code means that the AuthGuard will be applied to both routes in the app.controller.ts file.
Binding a guard at the method level is similar to controller-level binding; in this case, the @UseGuards decorator is used to guard a specific route, as shown below:
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
@Get('test')
@UseGuards(AuthGuard)
test(): string {
return 'This is a Test Route';
}
}
In this case, only the /test route will be guarded by the AuthGuard, while the other route, Get(), will be publicly accessible, as shown in the image below:
Next is understanding how multiple guards can be used in NestJS.
NestJS allows us to apply more than one guard at a time, either at the controller
or method
level, and these guards will be executed in the order in which they are bound. Let’s create a new file in the guards
folder called business.guard.ts
and create a guard that prevents users from accessing routes if they do not have a businessID.
import { Injectable, CanActivate, ExecutionContext } from '@nestjs/common';
import { Observable } from 'rxjs';
@Injectable()
export class BusinessGuard implements CanActivate {
canActivate(
context: ExecutionContext,
): boolean | Promise<boolean> | Observable<boolean> {
const request = context.switchToHttp().getRequest();
const businessID = request.headers['business_id'];
if (businessID && businessID === '892367480') {
return true;
}
return false;
}
}
The BusinessGuard class implements the canActivate method, which extracts the business_id header from the request object obtained via the getRequest method of the context. As mentioned earlier, this guard checks whether business_id exists in the request headers and whether the value passed by the user matches the predefined business_id value (892367480). If the business_id key is not passed, or the value does not match the predefined value, the guard returns false, indicating that access to the route should be denied; otherwise it returns true, meaning access should be allowed.
Now, let’s bind the BusinessGuard
and the AuthGuard
on the ‘/test’ route.
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
@Get('test')
@UseGuards(AuthGuard, BusinessGuard)
test(): string {
return 'This is a Test Route';
}
}
When a request is made to the /test route, the AuthGuard is called first. If the conditions specified in the AuthGuard are satisfied and it returns true (i.e. a valid api_key is passed in the request header), the BusinessGuard is then applied. This is demonstrated below:
In the first image, only the api_key was passed, hence the unauthorized access error. In the second image, both the api_key and business_id were passed, satisfying both of the guards specified for the /test route.
Next is implementing Authorization Check Skipping for specific endpoints.
When developing a NestJS application and implementing guards for endpoint authorization, you may have a controller that uses a guard on all routes. Still, there may be a specific endpoint where you want to omit authorization and make it publicly accessible while the guard is protecting the others.
To achieve this, metadata that provides extra information and context will be added to the guard instance. Create an auth.metadata.decorator
file in the src
directory. The metadata will be set with the key authorized for the endpoint that will be skipped.
import { SetMetadata } from '@nestjs/common';
export const AuthMetaData = (...metadata: string[]) =>
SetMetadata('authorized', metadata);
In the piece of code above, the AuthMetaData function calls the SetMetadata decorator, which takes two arguments: the string 'authorized' (the metadata key) and the metadata array that was passed as an argument to the AuthMetaData function.
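To see conceptually what SetMetadata does, here is a toy, framework-free model. The metadataStore map, the Sketch-suffixed names, and the manual decorator call are all illustrative inventions for this sketch; under the hood, Nest actually stores metadata with the reflect-metadata library:

```typescript
// Toy model of SetMetadata: it returns a decorator that records a
// key/value pair against the decorated target.
const metadataStore = new Map<object, Record<string, unknown>>();

function SetMetadataSketch(key: string, value: unknown) {
  return (target: object): void => {
    // Merge the new key/value pair into any metadata already recorded.
    const existing = metadataStore.get(target) ?? {};
    metadataStore.set(target, { ...existing, [key]: value });
  };
}

// Mirrors the AuthMetaData decorator defined above.
const AuthMetaDataSketch = (...metadata: string[]) =>
  SetMetadataSketch('authorized', metadata);

class PublicHandler {}
// Applying the decorator by hand, as the @-syntax would at class definition.
AuthMetaDataSketch('SkipAuthorizationCheck')(PublicHandler);

console.log(metadataStore.get(PublicHandler));
```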
Then, the auth.guard.ts
file will be modified to check if any authorization metadata is being set to any controller or method that is being guarded before processing the request.
const authMetaData = this.reflector.getAllAndOverride<string[]>(
'authorized',
[context.getHandler(), context.getClass()],
);
if (authMetaData?.includes('SkipAuthorizationCheck')) {
return true;
}
Here, the reflector and its getAllAndOverride function are used to get the metadata that was set by the AuthMetaData decorator. (This requires the Reflector from @nestjs/core to be injected into the guard, for example via constructor(private reflector: Reflector) {}.) The method takes two arguments: the first is the metadata key to retrieve (authorized), and the second is an array of targets the getAllAndOverride function will inspect for the metadata. The handler and class of the current execution context are passed so that when the method runs, it checks for metadata with the key “authorized” associated with the current handler first, then the class. If the retrieved authMetaData includes the string SkipAuthorizationCheck, the authorization check is skipped for that route.
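The handler-first, class-second lookup order of getAllAndOverride can be sketched without the framework (illustrative names and data structures; not Nest's actual internals):

```typescript
// Framework-free sketch of getAllAndOverride's lookup order:
// the handler's metadata, if present, overrides the class's metadata.
type MetadataMap = Record<string, string[] | undefined>;

function getAllAndOverrideSketch(
  key: string,
  targets: MetadataMap[],
): string[] | undefined {
  // Return the value from the first target that defines the key
  // (handler first, then class).
  for (const target of targets) {
    if (target[key] !== undefined) return target[key];
  }
  return undefined;
}

// The /public handler carries the skip marker; its controller does not.
const handlerMeta: MetadataMap = { authorized: ['SkipAuthorizationCheck'] };
const classMeta: MetadataMap = {};

const authMetaData = getAllAndOverrideSketch('authorized', [handlerMeta, classMeta]);
const skip = authMetaData?.includes('SkipAuthorizationCheck') ?? false;
console.log(skip); // true: the guard skips the check and allows the request
```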
Now, in the app.controller.ts
file, create a new route called public
; the guard will be bound at the controller level, while the AuthMetaData
decorator will be used to skip the authorization guard/check with the metadata string for the public
route only.
@Controller()
@UseGuards(AuthGuard)
export class AppController {
constructor(private readonly appService: AppService) {}
@Get()
getHello(): string {
return this.appService.getHello();
}
@Get('test')
test(): string {
return 'This is a Test Route';
}
@Get('public')
@AuthMetaData('SkipAuthorizationCheck')
getPublic(): string {
return 'public';
}
}
Let’s test this out:
In the first image, a request is made to the / endpoint and an unauthorized error is thrown; in the second image, the request is made to the /test route, and the same error is thrown. In the last image, where the request is made to the /public endpoint, a 200 success response is returned, which shows that the AuthGuard has been bypassed for this route only, using the AuthMetaData decorator.
Note: All the examples in this guide are basic, intended to help you understand the fundamentals of guards in NestJS. In real-world applications, guards may range from basic to complex depending on what you are building, and you will also need to store sensitive information (such as API keys) in an env file so it is not exposed publicly.
In this article, you took a deep dive into understanding what Guards are in NestJS, how to create an Authentication guard, binding of guards, using multiple guards in the NestJS application, and Skipping Guard checks.
From here, you can learn how to create Role-Based Access Control Authorization as well as learn about Authentication in NestJS.
You will find the complete source code of this tutorial here on GitHub.
Note: The GitHub repository consists of 4 branches: a master branch and one branch for each step from Steps 2 to 5. Switch between branches for code testing. Thank you.
Hello - Currently receiving an error when attempting to deploy a Svelte Vite app. Does anyone know what needs to be altered to correct the error described below?
The code builds & runs successfully locally. In the app, it is set up to deploy from a GitHub repo. The code builds successfully & then in the “Deploy logs” this is the error:
appname@1.0.0 preview
vite preview 0.0.0.0
Local: [http://localhost:4173/](http://localhost:4173/)
Network: use --host to expose
The package.json file looks like this:
"scripts": {
"dev": "vite dev",
"start": "vite dev",
"build": "vite build",
"prod": "vite dev -- --host 0.0.0.0 --port 8080",
"preview": "vite preview",
"test": "npm run test:integration && npm run test:unit",
"check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
"check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch",
"lint": "prettier --check . && eslint .",
"format": "prettier --write .",
"test:integration": "playwright test",
"test:unit": "vitest",
"postbuild": "node --experimental-json-modules ./generate-sitemap.js"
},
What needs to be altered to have a successful deployment?
The code builds & runs successfully locally. In the app, it is set up to deploy from a GitHub repo. The code builds successfully & then in the “Deploy logs” this is the error:
ERROR: failed to launch: determine start command: process type web was not found
in the package.json this is what the scripts sections looks like:
"scripts": {
"dev": "vite dev",
"build": "vite build",
"prod": "vite dev --host 0.0.0.0 --port 8080",
"test": "npm run test:integration && npm run test:unit",
"check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json",
"check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch",
"lint": "prettier --check . && eslint .",
"format": "prettier --write .",
"test:integration": "playwright test",
"test:unit": "vitest",
"postbuild": "node --experimental-json-modules ./generate-sitemap.js"
},
Under Activity > Deployment > Logs > Build Logs there is a check next to service, and the logs say ✔ build complete in green.
Under Activity > Deployment > Logs > Deploy Logs there’s a red [!], and after I ran it, it said it failed to deploy. But in these logs, I see the normal logs from my app starting up and even starting to work, but then it ends with no error or anything.
Why did it say it failed if my app seemed to be working normally and shows no error?
I did find it strange that it said the build was in progress after my app had already started, so maybe I configured something wrong, like it expects a build script that ends once it’s ready or something?
This is for a discord bot using node.js that will run continuously.
Modern web developers always deal with inputs, whether they be web forms, file uploads, or any other input on their websites. Therefore, it becomes essential for them to ensure that the inputs they receive from users are legitimate and harmless.
With advancements in web development, attackers have developed more and have found ways of exploiting websites from different aspects, such as form inputs. Attacks like Cross-Site Scripting (XSS) and SQL Injection have become more common and harmful than ever before. Therefore, any developer needs to ensure that the data they receive from a user is non-malicious or sanitized to ensure the integrity of their web application and server.
In this tutorial, you will learn how to use the Express-Validator package of JavaScript to validate and sanitize user inputs in web forms to ensure data consistency and server security.
To understand this tutorial properly, you should have the following setup:
Install Node.js version 14.x or higher on your system. You can use this DO tutorial to install the latest Node.js version on your system.
You should know how to code in JavaScript.
You should have Express JS version 4.x or higher installed on your system. You can use this to set up an Express server.
After meeting these requirements, you can continue with the tutorial.
Input validation and sanitization are key tasks for a developer, for the various reasons mentioned below:
Prevention from Injection Attacks: Consider a scenario where your application is vulnerable to SQL Injection attacks. This vulnerability arises due to the poor handling of SQL code during user authentication through an authentication form. An attacker can exploit this by passing some malicious SQL code instead of user credentials and gaining access to the server, which is game over for the application.
Data Integrity and Consistency: When user input is validated, it creates consistency in the data being stored on servers, making the data easier to work with. For example, if a user can send text data in an input meant for age, it creates inconsistency in the data stored on the server.
Data Compatibility: The data type must be consistent when data is used at various endpoints in a big organization. For instance, if users can enter garbage data instead of a proper email in their email credentials, it could cause complications when the organization needs to contact the user.
Enhanced User Experience: When inputs are validated, a developer can create logic to send appropriate and immediate feedback to users, allowing them to correct the invalid input they provided. This enhances the overall User Experience.
There are many such reasons to ensure that, as a developer, you handle the inputs in your web forms and websites efficiently and securely. In the following sections, you will learn how to create validation logic for your form inputs using the Express-Validator
package in an Express Js app.
Validator – A validator is a function that takes input and performs certain checks based on some criteria.
Sanitizer – A sanitizer is a function used to modify or cleanse input data to ensure that it is secure and adheres to required formats.
Validation Chain – A validation chain in Express-Validator is a sequence of validators or sanitizers applied to an input. For example, assume you have a form where you want users to enter their email and, to keep your data in the database consistent, you want no whitespace at either end of the input. You can use a validation chain like .isEmail().trim() to achieve this (you will learn about these in a later section). This will first check whether the input is an email; if it is, the trim() sanitizer will then remove whitespace from both ends of the input.
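The chain semantics can be modeled in a few lines. This is a toy model only: each step is either a validator (checks the current value) or a sanitizer (transforms it), applied in order. Express-validator builds its real chains differently; here the trim step runs first so the email check sees the cleaned value:

```typescript
// Toy model of a validation chain: steps run in order, sanitizers
// transform the value, validators check it.
type Step =
  | { kind: 'validate'; fn: (value: string) => boolean }
  | { kind: 'sanitize'; fn: (value: string) => string };

function runChain(input: string, steps: Step[]): { valid: boolean; value: string } {
  let value = input;
  let valid = true;
  for (const step of steps) {
    if (step.kind === 'sanitize') value = step.fn(value);
    else valid = valid && step.fn(value);
  }
  return { valid, value };
}

const emailChain: Step[] = [
  { kind: 'sanitize', fn: (v) => v.trim() },
  { kind: 'validate', fn: (v) => /^\S+@\S+\.\S+$/.test(v) }, // crude email check
];

console.log(runChain('  user@example.com ', emailChain));
```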
Now, you will learn to implement input validation and sanitization techniques in the following sections of this tutorial. Firstly, you will set up an npm project and then install Express Js, Express-Validator, and other dependencies required to follow the tutorial.
To create an Express server, first, you need to create an npm project. Firstly, open a terminal and type the following commands:
cd <project directory>
npm init
Then, you will be prompted for information about the application; you can either enter the specific details or keep pressing ‘Enter.’ Now, create an index.js file in the project folder; this will be the entry point for the server. Lastly, you must install the Express.js package to create an Express server. This can be done with the following command.
npm install express
You now have Express.js installed in your project. The next task is to install express-validator,
which can be done with the following command.
npm install express-validator
We have satisfied the basic dependencies so far; additionally, you can install nodemon
in your project, which automatically restarts a Node.js app when it detects file changes. Type the following command to install it.
npm install nodemon
We now have installed all requirements and can start working on the server and input validation in forms. This project will be the base structure used in all the later sections, where each section explains a different topic.
The next section explains how Express-Validator
works behind the scenes and how you can use it in forms to perform input validations on various fields.
Express-Validator
is a combination of middleware provided by the Express JS module and the Validator.js module, which provides validators and sanitizers for string data types. Express-Validator
provides the functionality to validate input data by using a validation chain.
In this section, you will learn to use the validators in Express-Validator
using the Node Js project set up in the previous section. For this, you will create a route in Express Server that allows users to sign up for a web application with their credentials (name, email, password). Then, you will create middleware to validate the inputs every time a user signs up for your server.
Open the index.js file and type the following code into it.
const express = require("express");
const app = express();
const { body, validationResult } = require("express-validator");

// Express.js middleware to use JSON objects
app.use(express.json());

app.post(
  "/signup",
  // using validation to verify valid inputs (MIDDLEWARE)
  [
    body("name").notEmpty(),
    body("email").isEmail(),
    body("password").notEmpty(),
  ],
  async (req, res) => {
    const errors = validationResult(req);

    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }

    res.status(200).json({ success: "Successful Sign Up!" });
  }
);

// Server Listening at port 3000
const port = 3000;
app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});
Here, we import Express and create an Express app. Then, we use JavaScript destructuring to import the body and validationResult functions from express-validator.
Info
The body() method is used to create validation chains that select and validate input data from request payloads (req.body), such as data sent by a POST request.
The validationResult() method collects the result of a validation chain for an HTTP request as a JavaScript object. If this object is empty, the payload has passed all validation tests; if not, it contains information about the payload and the criteria it did not satisfy.
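To make the idea of a validation chain concrete, here is a small plain-JavaScript sketch. This is only an illustration of the concept, not the real express-validator internals; the chain(), notEmpty(), and isEmail() helpers below are hypothetical stand-ins.

```javascript
// Conceptual sketch of a validation chain (not the real express-validator code):
// a "chain" records checks for one field and runs them against a payload.
function chain(field) {
  const checks = [];
  return {
    notEmpty() {
      checks.push(v => (v.length > 0) || `${field} is empty`);
      return this; // returning `this` is what allows chaining
    },
    isEmail() {
      checks.push(v => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v) || `${field} is not an email`);
      return this;
    },
    run(payload) {
      return checks
        .map(check => check(payload[field] ?? ""))
        .filter(result => result !== true); // keep only the error messages
    },
  };
}

const errors = [
  ...chain("name").notEmpty().run({ name: "" }),
  ...chain("email").isEmail().run({ email: "not-an-email" }),
];
console.log(errors); // e.g. [ 'name is empty', 'email is not an email' ]
```

Each chained call adds one check, and running the chain returns only the failures, which mirrors how validationResult() exposes the errors collected by body().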
Then, we use the express.json() middleware of Express to work with JSON objects.
Finally, we create an Express route using an HTTP POST request (you can work with any other type of request, such as GET, PUT, etc.). The route created here represents a signup form on a web server. The form takes input through client-side HTML, and the server handles it by accessing the submitted fields through the request body.
The body() method of express-validator fetches the values of form fields whose name attribute matches the argument of the body() method; i.e., body("password") will fetch the value of the HTML input component whose name attribute is password.
Then, on these values, you can use the validators provided by express-validator. In this example, we have used two validators, isEmail() and notEmpty(), which determine whether the input is an email and is not empty, respectively. If the input does not match the criteria of the validator(s), express-validator records an error that is exposed through validationResult(). However, if the input matches the criteria of the applied validators, the validationResult() object will be empty. This is then used inside the handler of the /signup route to create a simple check with an if statement: if the validationResult() object is not empty, an error with response code 400 is sent to the client. Otherwise, in this example, a message of successful sign-up is sent to the client (this is just for demonstration; you can perform any task, such as storing the input data in a database or using the input credentials to authenticate a user). Then, we host the server on localhost at port 3000.
These basic validators were used in this example; however, you can use any other validator offered by the express-validator package. In the following table, you can find some commonly used validators, which can be used in the same manner as isEmail() and notEmpty().
Validator | Usage |
---|---|
isDate | Checks if the input is a valid date. |
isEmpty | Checks if the input is an empty string. |
isHash(algorithm) | Checks if the input is a hash. The argument takes the hashing algorithm, such as md5, sha256, etc. |
isJWT | Checks if the input is a JWT (JSON Web Token). |
isURL | Checks if the input is a URL. |
isBoolean | Checks if the input is a boolean (true or false). |
isNumeric | Checks if the input contains only numeric characters. |
isAlphanumeric | Checks if the input is alphanumeric. |
isInt | Checks if the input is an integer. |
isDecimal | Checks if the input is a decimal number. |
isFloat | Checks if the input is a floating-point number. |
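As a rough illustration of what a few of these validators test, here are simplified plain-JavaScript equivalents. These helpers are hypothetical, written only to show the idea; the real Validator.js implementations handle many more edge cases, locales, and options.

```javascript
// Simplified, illustrative equivalents of a few validators from the table.
// (Not the library's actual logic -- for intuition only.)
const checks = {
  isNumeric: v => /^[0-9]+$/.test(v),
  isInt: v => v.trim() !== "" && Number.isInteger(Number(v)),
  isBoolean: v => ["true", "false"].includes(v),
  isURL: v => {
    try { new URL(v); return true; } catch { return false; }
  },
};

console.log(checks.isNumeric("12345"));            // true
console.log(checks.isInt("3.5"));                  // false
console.log(checks.isBoolean("maybe"));            // false
console.log(checks.isURL("https://example.com"));  // true
```

In the route, the real versions are simply chained onto a field, for example body("age").isInt().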
Now, in the following section, you will learn how to validate file inputs to make file uploads through your form (such as a photograph or a signature) secure and controlled.
In the previous section, you learned how to use express-validator to validate inputs passed to web forms. In this section, you'll learn how to handle file inputs and validate them before uploading them to a server, to avoid storing malicious files. In addition to this security benefit, you can put constraints on the file input size, ensuring no large files fill up your server.
We will use the multer package to handle file uploads through forms. You can use the extensive DigitalOcean tutorial Uploading files with multer in Node.js and Express to understand how multer works. express-validator does not provide specific functionality for file input handling; however, you can combine the multer library with express-validator to handle file input validation for your web forms. Before using multer, you must install the package in your Node.js project, which can be done with the following commands.
cd <project folder>
npm install multer
Now, you will see how to integrate multer with your Express server and express-validator. In the following code, you will implement multer and express-validator and put some constraints on uploaded files.
const express = require("express");
const app = express();
const { body, validationResult } = require("express-validator");
const multer = require("multer");

// Express.js middleware to parse JSON request bodies
app.use(express.json());

// CREATING MIDDLEWARE USING MULTER TO HANDLE UPLOADS

// creating a storage object for multer
const storage = multer.diskStorage({
  // providing the destination where uploaded files will be stored on the server
  destination: "./uploads/",
});

// defining file storage to local disk and putting constraints on uploaded files
const upload = multer({
  storage: storage,
  limits: { fileSize: 1 * 1024 * 1024 }, // file size in bytes (1 MB)

  // you can add other constraints according to your requirements, such as file type, etc.
});

app.post(
  "/fileUpload",

  // ensure only a single file is uploaded per request, from the form field named "fileUpload"
  upload.single("fileUpload"),

  // using validation to verify valid inputs
  [
    body("fileDescription").notEmpty(),
    // you can add as many validators as you require
  ],
  async (req, res) => {
    const errors = validationResult(req);

    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    const file = req.file;
    // perform any other tasks on the file or other form inputs

    res.status(200).json({ success: "File uploaded successfully!" });
  }
);

// Server listening at port 3000
const port = 3000;
app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});
Here, we have defined the necessary imports and the Express JSON middleware to work with JSON objects over a network. Then, we define the storage engine for multer to use. The example is explained in detail below:
First, we define a storage category. Multer provides both disk storage and memory storage (buffer). In this example, we have used disk storage to keep uploaded files on the server's local disk rather than holding them in memory.
When using disk storage, it is necessary to provide the directory where uploaded files will be stored. This can be done using the destination key, which takes the path as its value. Here, we have used an uploads folder in the project directory itself.
After this, we create an upload object that handles the upload of files. This calls the multer import with a JavaScript object as input. This object contains various constraints/validations to be satisfied before storing the input file on disk. storage is a required property, which takes the storage object defined earlier.
We have added another constraint using the limits option to ensure no file is greater than 1 MB. Note that it counts size in bytes.
Info: You can set other limits (for example, the maximum number of files) in the limits object according to your requirements.
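For example, a file-type constraint can be added through multer's fileFilter option. The callback below follows multer's documented (req, file, cb) signature; the allowed MIME types and the function name are assumptions made for illustration. It is shown standalone here so the logic is easy to follow without a running server.

```javascript
// Hypothetical fileFilter for multer: accept only JPEG/PNG images.
// multer calls this with (req, file, cb) for every incoming file.
const ALLOWED = ["image/jpeg", "image/png"];

function imageOnlyFilter(req, file, cb) {
  if (ALLOWED.includes(file.mimetype)) {
    cb(null, true);   // accept the file
  } else {
    cb(null, false);  // silently reject; pass an Error instead to fail the request
  }
}

// Exercising the callback directly:
imageOnlyFilter({}, { mimetype: "image/png" }, (err, ok) => console.log(ok));  // true
imageOnlyFilter({}, { mimetype: "text/html" }, (err, ok) => console.log(ok));  // false
```

In the upload object above, it would be wired in as fileFilter: imageOnlyFilter alongside storage and limits.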
Then, we create an Express route to handle the file upload and add the middleware upload.single(), ensuring that only one file is uploaded per request. It takes the name of the form's file input field, fileUpload, as its required argument.
After this, we add the express-validator middleware, as explained in the previous section.
Lastly, we create the logic to handle the route and check for any errors in input validation.
You can access the uploaded file from your route logic through the req.file object.
Here, you learned how to handle file input validation using multer and integrate it with the express-validator validation middleware. In addition, you can add many other advanced or custom validation methods in multer and express-validator. In the next section, you will learn how to sanitize input values, which is necessary to prevent attacks like SQL injection.
So far, you have learned how to perform input validation on data and file inputs. Now, you will extend that knowledge by learning how to sanitize (validated) inputs. express-validator provides many sanitizers from the Validator.js library.
To explain sanitizers, we will use a similar approach to the basic form validation example above and add sanitizers to the route.
const express = require("express");
const app = express();
const { body, validationResult } = require("express-validator");

// Express.js middleware to parse JSON request bodies
app.use(express.json());

app.post(
  "/sanitizedInput",
  // validating and sanitizing inputs
  [
    body("name").notEmpty().trim(),
    body("email").isEmail().trim(),
    body("dob").toDate(),
  ],
  async (req, res) => {
    const errors = validationResult(req);

    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    // remaining logic for your route
    res.status(200).json({ success: "Successful Sign Up!" });
  }
);

// Server listening at port 3000
const port = 3000;
app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});
You create another Express server by importing the necessary packages and using the Express JSON middleware to handle JSON objects. Then, you create a route to handle sanitized input (/sanitizedInput) and write the route logic.
Again, you need to add the express-validator middleware. Here, we use three input fields: name, email, and dob. First, we check that the name is not empty using notEmpty() and then apply the trim() sanitizer on it, which removes whitespace from both ends of the input value. Similarly, for the email input field, we first check whether it is a valid email using isEmail() and then apply the trim() sanitizer to remove whitespace from both ends.
For the dob input field, we use the toDate() sanitizer (without any validator) to convert the input string into a Date object. If the string is not an actual date, toDate() will return a null value.
After that, you perform the check to ensure that no errors are stored in the validationResult() object and can continue with the rest of the logic for your route.
Note:
Using multiple validators and sanitizers on a single input, as done with body("name").notEmpty().trim(), creates a sequential chain of validators/sanitizers known as a validation chain.
There are plenty more sanitizers available in express-validator, such as blacklist(), whitelist(), etc. You can use them according to your requirements.
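As a rough sketch of what those two sanitizers do, here are simplified plain-JavaScript equivalents. This assumes the argument is treated as a character set, as in Validator.js; the real functions also require you to escape special RegExp characters.

```javascript
// Simplified equivalents of blacklist() and whitelist() (for intuition only):
// blacklist removes the listed characters; whitelist keeps only the listed ones.
const blacklist = (value, chars) => value.replace(new RegExp(`[${chars}]`, "g"), "");
const whitelist = (value, chars) => value.replace(new RegExp(`[^${chars}]`, "g"), "");

console.log(blacklist("he<ll>o", "<>"));  // "hello"
console.log(whitelist("abc123", "a-z"));  // "abc"
```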
In this tutorial, you learned how to effectively handle form inputs on your server. You built servers using the express-validator library with input validation and sanitization techniques. You can build upon this knowledge and extend your input validation further by using the many other validators and sanitizers provided by the same library.
express-validator and multer both allow you to create custom validators, which you can try as an exercise to increase your productivity in complex projects.
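As a starting point for that exercise, a custom validator in express-validator is a function passed to .custom(); it receives the field value and either returns a truthy value or throws. The password policy below is hypothetical, for illustration only:

```javascript
// Hypothetical custom validator: the kind of function .custom() accepts.
// It gets the field value and must return truthy or throw to signal failure.
function strongPassword(value) {
  if (typeof value !== "string" || value.length < 8) {
    throw new Error("password must be at least 8 characters");
  }
  if (!/[0-9]/.test(value)) {
    throw new Error("password must contain a digit");
  }
  return true;
}

// In a route it would be attached as: body("password").custom(strongPassword)
console.log(strongPassword("s3curePass")); // true
try { strongPassword("short"); } catch (e) { console.log(e.message); }
```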
uptime just gives the load average as load average: 0.0, 0.0, 0.0, whereas both my local dev (WSL) and my Docker container show the proper load average, which I'm later using in my Node app via os.loadavg().
I assume there are permission errors, but isn't there any way to get system load info, i.e. CPU, memory, etc., in a web app environment, or will I need to recreate it as a Droplet? In the tutorial they also run uptime with no issues.
Thanks
As a last resort, I'm considering creating a custom Docker image, but that seems a bit excessive…
After resizing it, we could no longer fetch from the server, even though the application is 100% running; I've checked many times via pm2. Logs are being printed out constantly.
I’ve read countless threads and tried countless solutions, none that worked so far. Port seems fine. Firewalls are disabled.
The 2 errors we are getting are net::ERR_CONNECTION_REFUSED and TypeError: Failed to fetch.
The application runs fine on localhost, just not on the droplet. Would appreciate any help. If there’s any further information needed please let me know.
Edit: Not sure how relevant, but I ran “service nginx status” and got the following:
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2024-01-03 21:00:38 UTC; 13h ago
     Docs: man:nginx(8)
  Process: 903 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)

Jan 03 21:00:37 glorious-nexus-api systemd[1]: Starting A high performance web server and a reverse proxy server…
Jan 03 21:00:37 glorious-nexus-api nginx[903]: nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed
Jan 03 21:00:38 glorious-nexus-api nginx[903]: 2024/01/03 21:00:37 [emerg] 903#903: open() "/var/log/nginx/access.log" failed (2: No
Jan 03 21:00:38 glorious-nexus-api nginx[903]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jan 03 21:00:38 glorious-nexus-api systemd[1]: nginx.service: Control process exited, code=exited status=1
Jan 03 21:00:38 glorious-nexus-api systemd[1]: nginx.service: Failed with result 'exit-code'.
Jan 03 21:00:38 glorious-nexus-api systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Each is on its own repository on GitHub.
Usually I referred to the web "build" dir from inside the server to serve it as a static dir, using code like this
path.join(__dirname, '..', '..', 'build')
as in “Heroku” for example
Now, how can I do the same on DigitalOcean? How can I refer to the "build" dir inside the Web app resource from inside the server resource, since I don't know the directory structure of the app?
This is my typical response I get
{"statusCode":404,"message":"ENOENT: no such file or directory, stat '/workspace/build/index.html'"}
I have created a Droplet with Ubuntu 20.04 (LTS) x64. Inside this server I have configured a MongoDB database (version 4.4) and Node.js (version 14.18.3), and inside my Node.js environment I have installed a frontend application (React) and a backend (Express).
I have the following status:
The issue: When I try to access MongoDB through the backend "http://{iphost}:3333/login", the backend stops working (Error: socket hang up), meaning that I might have an issue between the backend and MongoDB. In more detail, it shows an error related to decryption (Error: Error during decryption (probably incorrect key). Original error: Error: error:04099079:rsa routines:RSA_padding_check_PKCS1_OAEP_mgf1:oaep decoding error); however, my ".pem" file is correct and the address of this file is also correct.
My question: Is there any kind of limitation in terms of port access that might prevent this communication between the backend (3333) and MongoDB (27017)? Has anyone faced this kind of error? Any suggestions?
Important: This application (frontend+backend+MongoDB) has the same configuration as the one I have installed and working on my local computer.
Important (2): Unfortunately, I cannot upgrade my system (MongoDB, Node.js, and npm) because it is a legacy application.
Many thanks and Regards
I have thoroughly followed all of the steps and prerequisite tutorials in this link for my Ubuntu 22.04 Droplet setup: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-22-04
All of the steps seemed to be set correctly, I’ve checked nginx, ufw statuses, etc. Correctly set the localhost port to the port of my node.js app.
But when I try to test one of my application’s endpoint with postman - it doesn’t work.
When I call just https://mydomain.com, it returns the HTML of the "Welcome to nginx!" page that I can see when I open the domain in the browser.
And when I try to call https://mydomain.com/endpoint, it returns a 404 error message.
Seems that somewhere the request is not being redirected/put through correctly. How would I go about debugging/fixing this?
Thank you.
I constantly get a build error saying it can't find the module dist/server.js, which is built by tsc. I don't get any errors from tsc. This error appeared out of nowhere. Locally it works totally fine.
Here's the relevant part of my package.json:
"main": "dist/server.js",
"scripts": {
"build:digitalocean": "npm install --production=false && npm run build && npm ci",
"start": "npm run build && node dist/server.js",
"build": "tsc",
"postinstall": "tsc",
"dev": "NODE_ENV=development node dist/server.js"
},
I have been trying to establish a WebSocket connection with my App Platform's server component for a while and still no success. Surprisingly for me, everything is working fine on my local machine.
After emitting a Socket.IO event from the client, I get only the following inside the network request's Preview (I don't know how to attach pictures here; DevTools screen: prnt.sc/oqAH4cvjsSxq):
You need to enable JavaScript to run this app.
I am using node:cluster for my server. This code was double-checked against the official Socket.IO docs for node:cluster.
// imports implied by the snippet (following the official Socket.IO cluster docs)
const cluster = require("node:cluster");
const http = require("node:http");
const express = require("express");
const { Server } = require("socket.io");
const { setupMaster, setupWorker } = require("@socket.io/sticky");
const { createAdapter, setupPrimary } = require("@socket.io/cluster-adapter");

const app = express();
const PORT = process.env.PORT || 3265; // assumed; PORT is defined elsewhere in the original project

if (cluster.isPrimary) {
console.log("Primary cluster is running...");
const httpServer = http.createServer(app);
// setup sticky sessions
setupMaster(httpServer, {
loadBalancingMethod: "least-connection",
});
// setup connections between the workers
setupPrimary();
const primaryClusterPort = 3265;
httpServer.listen(primaryClusterPort, () => {
// Listen on the HTTP server in the worker process
console.log(`Primary cluster running on port ${primaryClusterPort}...`);
});
const workerCount = 2;
for (let i = 0; i < workerCount; i++) {
cluster.fork();
}
cluster.on("online", function (worker) {
console.log("Worker " + worker.process.pid + " is online");
});
cluster.on("exit", function (worker, code, signal) {
console.log(
"Worker " +
worker.process.pid +
" died with code: " +
code +
", and signal: " +
signal
);
console.log("Starting a new worker");
cluster.fork();
});
} else {
const httpServer = http.createServer(app); // Create the HTTP server in the worker process
const io = new Server(httpServer, {
cors: {
origin: "*", // Allow all origins
methods: ["GET", "POST"], // Allow GET and POST methods
credentials: true, // Allow credentials
},
});
io.adapter(createAdapter());
// setup connection with the primary process
setupWorker(io);
io.on("connection", (socket) => {
socket.on("exportExcel", (data) => {
console.log("Socket on `exportExcel`: ", data);
// for test
socket.emit("exportExcel", { path: "export905.xlsx" });
});
});
httpServer.listen(PORT, () => {
// Listen on the HTTP server in the worker process
console.log(`App listening on port ${PORT}...`);
});
}
I am using Node v18.12.1, Socket.IO v4.1.3 on the server, Socket.IO v4.6.1 on the client.
I hope anybody can help me and I am ready to provide any additional information if needed. Thank you in advance.
The repository is the following: https://github.com/AlexAntonioG/baevi-site
[2023-11-28 05:21:52] > baevi-site@1.0.0 build
[2023-11-28 05:21:52] > cross-env NODE_ENV=production webpack --progress --hide-modules
[2023-11-28 05:21:52]
[2023-11-28 05:22:26] Hash: 7913dd8261056402892c
[2023-11-28 05:22:26] Version: webpack 3.12.0
[2023-11-28 05:22:26] Time: 32397ms
[2023-11-28 05:22:26] Asset Size Chunks Chunk Names
[2023-11-28 05:22:26] bg4.jpg?b50ec0d45ed3be7f47e5c648a9ced241 145 kB [emitted]
[2023-11-28 05:22:26] baevi.png?05ef3d07ba5dded57cf70a90aa61154e 194 kB [emitted]
[2023-11-28 05:22:26] fact4.png?43d271ac3d60486cb436204b738aa92d 3.76 kB [emitted]
[2023-11-28 05:22:26] service-icon2.png?69edbf313d9cb235404495fae432ca01 3.65 kB [emitted]
[2023-11-28 05:22:26] service-icon3.png?e18f15899ebc72414092b816d2d1736c 4 kB [emitted]
[2023-11-28 05:22:26] service-icon1.png?1f84489d40354db5e558b7dc0a64a3b7 3.23 kB [emitted]
[2023-11-28 05:22:26] fact3.png?4aaa8aee9db87291b60d969b334452c9 2.63 kB [emitted]
[2023-11-28 05:22:26] bg5.jpg?804f9c7b1570bacc2bdde630edef67f8 121 kB [emitted]
[2023-11-28 05:22:26] bg1.jpg?f73b2b0dc5960cf4912d63ad58143635 289 kB [emitted] [big]
[2023-11-28 05:22:26] bg.jpg?804293a81cc3197f9ced71ea410700c3 194 kB [emitted]
[2023-11-28 05:22:26] banner2.jpg?f885a0dbab8c59311cbc365cac6c87bd 210 kB [emitted]
[2023-11-28 05:22:26] slide-page3.jpg?66d5835f5c48f935b9c0c58138800f0e 180 kB [emitted]
[2023-11-28 05:22:26] slide-page4.jpg?8b4ee026018efc1401226c8e94c48a17 477 kB [emitted] [big]
[2023-11-28 05:22:26] service-center.jpg?794ef31b83b2133551a3d3fc2d4fabcf 53.7 kB [emitted]
[2023-11-28 05:22:26] service-icon4.png?1120fc8f71f76e42c4a16228755c7696 3.28 kB [emitted]
[2023-11-28 05:22:26] project1.jpg?6880dc7254be2264e991b7b34360eb23 207 kB [emitted]
[2023-11-28 05:22:26] project2.jpg?acd1debaa96f18c192b9376fc9285cbc 237 kB [emitted]
[2023-11-28 05:22:26] project3.jpg?6636c728ed8e8869be048e59dcecd774 264 kB [emitted] [big]
[2023-11-28 05:22:26] project4.jpg?b4f345807012eaa70e86b789b920c723 342 kB [emitted] [big]
[2023-11-28 05:22:26] project5.jpg?648bd7a6a7d0a1a995a8720bbc296b4d 134 kB [emitted]
[2023-11-28 05:22:26] project6.jpg?432a693797ae3b6ad83c991686cb74c1 192 kB [emitted]
[2023-11-28 05:22:26] news1.jpg?ef727f467176148fc8c64819ae101546 134 kB [emitted]
[2023-11-28 05:22:26] news2.jpg?a422b03b7d56e230cc44b20e067ace88 156 kB [emitted]
[2023-11-28 05:22:26] news3.jpg?6421eaf7ef543a09b79e12566a9381af 154 kB [emitted]
[2023-11-28 05:22:26] fact1.png?c4fb7997bf01e728803c136738fb1612 2.81 kB [emitted]
[2023-11-28 05:22:26] fact2.png?30ad417e780e02e9094b7ddfe240d604 1.86 kB [emitted]
[2023-11-28 05:22:26] testimonial1.png?5a00caacbdcfb0d9311a536621a5db87 126 kB [emitted]
[2023-11-28 05:22:26] testimonial2.png?56b155519b7d43a2ae2457ea7eeb783e 73 kB [emitted]
[2023-11-28 05:22:26] testimonial3.png?41deae3cad042a701bc52fbfc4220092 35.4 kB [emitted]
[2023-11-28 05:22:26] client1.png?e1cafa858e4c92b192b11edc9571ac33 183 kB [emitted]
[2023-11-28 05:22:26] client2.png?b60aa54dcf2874cd89b19d94ba0fd3a7 82 kB [emitted]
[2023-11-28 05:22:26] client3.png?483cd5da29e3158b46595977cc765d20 66 kB [emitted]
[2023-11-28 05:22:26] client4.png?6e4cecfa28da9c46d41fde985498e5fc 251 kB [emitted] [big]
[2023-11-28 05:22:26] client5.png?2d50b964b6876d256cbf3f129f8f5ccb 215 kB [emitted]
[2023-11-28 05:22:26] client6.png?42f3be0b50cf7908549d01287120e50d 82.4 kB [emitted]
[2023-11-28 05:22:26] client7.png?8c0ed02929303c1bf91a0054953d2eb9 160 kB [emitted]
[2023-11-28 05:22:26] client8.png?bac6ca1a9bd8451eb2c64d761d751ab6 175 kB [emitted]
[2023-11-28 05:22:26] client9.png?4e224bc926912b8c1ee34d410f0e402c 231 kB [emitted]
[2023-11-28 05:22:26] service1.jpg?7736540fee500dd6197286ca8ae8f87b 163 kB [emitted]
[2023-11-28 05:22:26] service2.jpg?58a3c8b346dfc428ea39886a7d433747 72.8 kB [emitted]
[2023-11-28 05:22:26] service3.jpg?607fb3fe71857105f78aabe9d234bda8 194 kB [emitted]
[2023-11-28 05:22:26] team6.jpg?18101299ee308da73f848c95b9c84a55 28.2 kB [emitted]
[2023-11-28 05:22:26] team5.jpg?642feee91e44367535e06118f65a3770 111 kB [emitted]
[2023-11-28 05:22:26] team4.jpg?2cb0f25d3f3e91e74607eb9ab1cc1547 86.8 kB [emitted]
[2023-11-28 05:22:26] team3.jpg?68a8946d0b821df6ccf7391699121453 71.3 kB [emitted]
[2023-11-28 05:22:26] team1.jpg?c35b478e55b7e69a6c79beb37a9eb4d0 107 kB [emitted]
[2023-11-28 05:22:26] website.png?3602a256c73366fbdf290d390d51a2c1 13.6 kB [emitted]
[2023-11-28 05:22:26] build.js 1.19 MB 0 [emitted] [big] main
[2023-11-28 05:22:26] build.js.map 7.58 MB 0 [emitted] main
[2023-11-28 05:22:26] npm notice
[2023-11-28 05:22:26] npm notice New major version of npm available! 8.19.4 -> 10.2.4
[2023-11-28 05:22:26] npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4>
[2023-11-28 05:22:26] npm notice Run `npm install -g npm@10.2.4` to update!
[2023-11-28 05:22:26] npm notice
I’ve been making several changes and debugging certain situations, such as the port and the startup commands, and little by little I have advanced, but this time I’m stuck and I don’t know why. I would appreciate your help :D
app.domain.com/login does not work (site can’t be reached), while app.domain.com:3000/login works.
Here is my reverse proxy settings in the sites-available folder;
server {
server_name app.domain.com;
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/app.domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/app.domain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location /login {
proxy_pass http://localhost:3000/login;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
if ($host = app.domain.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name app.domain.com;
return 404; # managed by Certbot
}
Running sudo nginx -t says the test is successful, so I am wondering what the problem could be.
The backend part builds and deploys successfully, while the frontend part builds successfully but fails at deployment.
here are the frontend deployment logs:
[2023-11-25 08:02:28] > my-app1@0.1.0 start
[2023-11-25 08:02:28] > export SET NODE_OPTIONS=--openssl-legacy-provider && react-scripts start
[2023-11-25 08:02:28]
[2023-11-25 08:02:33] ℹ 「wds」: Project is running at http://10.244.94.220/
[2023-11-25 08:02:33] ℹ 「wds」: webpack output is served from
[2023-11-25 08:02:33] ℹ 「wds」: Content not from webpack is served from /workspace/Front-end/public
[2023-11-25 08:02:33] ℹ 「wds」: 404s will fallback to /
[2023-11-25 08:02:33] Starting the development server...
[2023-11-25 08:02:33]
[2023-11-25 08:02:33] npm notice
[2023-11-25 08:02:33] npm notice New major version of npm available! 9.6.7 -> 10.2.4
[2023-11-25 08:02:33] npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.4>
[2023-11-25 08:02:33] npm notice Run `npm install -g npm@10.2.4` to update!
[2023-11-25 08:02:33] npm notice
and here’s the link to the code i am trying to deploy: github.com/learnacadman/mern-lms-app
Any idea how to solve this? I can’t find any.
Thanks.
The build and the run part were successful according to DigitalOcean; however, when I try to access the App I get the error “Cannot GET /”. And in the headers I see Status Code 404: Not Found. So something is definitely not configured correctly.
Some information about my configuration:
a) Backend
For the backend I used the custom build command yarn build:digitalocean that I’ve defined in my package.json file in the api folder as follows: "scripts": {"build:digitalocean": "yarn cache clean && yarn install --production=false && rm -rf node_modules && yarn install --production --frozen-lockfile"}, and the run command for the backend is node index.js.
b) Frontend
The frontend build command is yarn build, which is defined in the package.json in the client folder as follows: "scripts": {"build": "vite build",...}, and the run command is npx serve -s dist, with dist as the folder name for the built files, specified in vite.config.js as follows:
import { defineConfig } from 'vite'; import react from '@vitejs/plugin-react';
export default defineConfig({ plugins: [react()], build: { outDir: 'dist' } })
Runtime Logs
a) Backend
Server running on port 4000
b) Frontend
(node:18) [DEP0040] DeprecationWarning: The 'punycode' module is deprecated. Please use a userland alternative instead. (Use 'node --trace-deprecation ...' to show where the warning was created) INFO Accepting connections at http://localhost:8080
==> I get the same warning in development as well but the app runs completely fine locally despite this warning.
RUN IN DEV MODE
On my local machine I can easily run both backend and frontend with the following commands:
backend: yarn dev
frontend: nodemon index.js
In the app I also connect to the external Atlas MongoDB, but I do not think that this is the issue here — though maybe it is. For sure I need to adapt certain settings to make it all run fine, but I expected that at least I would get an error message that would help me further.
What could be the issue? Any help would be very much appreciated!
A question that just came to my mind: if I included my .env file with my ENV variables in the git repository (which I have…), will the Environment Variables in the App Configuration on DigitalOcean overwrite these values if the keys are equal? I hope this is the case…
I am using Knex in my Node.js Express application. Here is my Knex config:
production: {
client: 'postgresql',
connection: {
connectionString: process.env.PSQL_CONNECTION_STRING,
ssl: {
ca: Buffer.from(process.env.CA_CERT_BASE_64 ?? '').toString('utf-8')
}
},
pool: {
min: 2,
max: 10
},
migrations: {
tableName: 'knex_migrations'
  }
}
I have tried the following things:
After all of this, I get the following error:
Error: self signed certificate in certificate chain
By setting NODE_TLS_REJECT_UNAUTHORIZED=0 in my environmental variables, I am able to connect to my database. However, I would like to avoid doing that.
Does anyone have any insight on what can be done?
During development I’ve faced the following issue: when I write a row into the database and then immediately (after the promise is resolved, of course) read the table, I get a dataset without that new row (in most cases, but sometimes the dataset contains all expected rows).
But if I add some delay (600 ms, for example), then the dataset contains all expected rows in all cases.
My question is: does this behavior relate to latency caused by copying data from the primary node to the read-only nodes? If so, how much time is needed for such copying? Or, if the time varies and depends on many factors, how can I measure this latency for my case?
Thanks for the help!
I’m trying to create a function using Node.js and SendGrid. I think I have the code correct; however, I’m getting an error on importing the SendGrid module.
Error [ERR_MODULE_NOT_FOUND]: Cannot find package ‘@sendgrid/mail’ imported from /tmp/index.mjs
Done the usual Googling and searching the community, but I can’t seem to find the solution.
I bring in the import like this:
import * as sgMail from '@sendgrid/mail';
package.json looks like this:
{
"name": "sendgrid-sendemail-js-do",
"type": "module",
"main": "main.js",
"dependencies": {
"@sendgrid/mail": "^7.7.0",
"axios": "^1.5.1"
}
}
Managed to successfully get a different function working after working through an issue, but this one seems to be racking my brains!
Any pointers would be appreciated.
─────────── app build ───────────╼
[2023-09-19 12:31:36] │ ---> Node.js Buildpack
[2023-09-19 12:31:36] │ ---> Installing toolbox
[2023-09-19 12:31:36] │ ---> - jq
[2023-09-19 12:31:37] │ ---> - yj
[2023-09-19 12:31:37] │ ---> Getting Node version
[2023-09-19 12:31:37] │ ---> Resolving Node version
[2023-09-19 12:31:39] │ ---> Downloading and extracting Node v16.20.2
[2023-09-19 12:31:43] │ ---> Parsing package.json
[2023-09-19 12:31:44] │ ---> No file to start server
[2023-09-19 12:31:44] │ ---> either use 'docker run' to start container or add index.js or server.js
[2023-09-19 12:31:44] │ Project contains package-lock.json, using npm
[2023-09-19 12:31:48] │ Using npm v8.19.4. To configure a different version of npm, set the engines.npm property in package.json.
[2023-09-19 12:31:48] │ See https://do.co/apps-buildpack-node for further instructions.
[2023-09-19 12:31:48] │ Installing node_modules using npm (from package-lock.json)
[2023-09-19 12:31:48] │ Running npm ci
[2023-09-19 12:31:48] │
[2023-09-19 12:32:12] │ npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
[2023-09-19 12:32:13] │ npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ > project@1.0.0 prepare
[2023-09-19 12:32:25] │ > husky install
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ sh: 1: husky: not found
[2023-09-19 12:32:25] │ npm notice
[2023-09-19 12:32:25] │ npm notice New major version of npm available! 8.19.4 -> 10.1.0
[2023-09-19 12:32:25] │ npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.1.0>
[2023-09-19 12:32:25] │ npm notice Run `npm install -g npm@10.1.0` to update!
[2023-09-19 12:32:25] │ npm notice
[2023-09-19 12:32:25] │ npm ERR! code 127
[2023-09-19 12:32:25] │ npm ERR! path /workspace
[2023-09-19 12:32:25] │ npm ERR! command failed
[2023-09-19 12:32:25] │ npm ERR! command sh -c -- husky install
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ npm ERR! A complete log of this run can be found in:
[2023-09-19 12:32:25] │ npm ERR! /home/apps/.npm/_logs/2023-09-19T12_31_48_953Z-debug-0.log
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ unable to invoke layer creator
[2023-09-19 12:32:25] │ installing node_modules: exit status 127
[2023-09-19 12:32:25] │ ERROR: failed to build: exit status 1
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ For documentation on the buildpacks used to build your app, please see:
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ Node.js v0.3.6 https://do.co/apps-buildpack-node
[2023-09-19 12:32:25] │
[2023-09-19 12:32:25] │ ✘ build failed
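The failing step above is the prepare script running husky install after devDependencies were pruned. One workaround I’ve seen suggested (an assumption on my side, not something from this log) is to make the hook tolerant of environments where husky isn’t installed:

```json
{
  "scripts": {
    "prepare": "husky install || true"
  }
}
```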
]]>I always get this error: “Access to fetch at ‘https://nodejs.site.com/apicall’ from origin ‘https:// site.com’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.” (Error 502)
The Node.js server is running, because I can reach it using the IP address (over http).
This is my nginx default config file:
server {
    root /var/www/mysite.com/html/build;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.com;

    location / {
        try_files $uri $uri/ /index.html;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    server_name nodejs.mysite.com;

    location / {
        proxy_pass https://localhost:8800; # whatever port your app runs on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/nodejs.mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nodejs.mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
error log from Nginx:
[error] 2034#2034: *146 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 43.224.169.197, server: nodejs.site.com, request: "GET /apicall_id=3 HTTP/1.1", upstream: "https://127.0.0.1:8800/apicall_id=3", host: "nodejs.site.com", referrer: "http://localhost:3000/"
What have I tried already?
What is wrong? I hope someone can help me…
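Based on the “wrong version number” part of the handshake error, my current guess (unverified) is that nginx is trying to speak TLS to an upstream that only serves plain HTTP. The sketch below proxies over http instead, with nginx still terminating TLS for clients:

```nginx
# Sketch: nginx terminates TLS; the Node app on port 8800 speaks plain HTTP.
location / {
    proxy_pass http://localhost:8800;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}
```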
edit: status SSH
Loaded: loaded (/lib/systemd/system/ssh.service; disabled; preset: enabled)
Drop-In: /etc/systemd/system/ssh.service.d
└─00-socket.conf
Active: active (running) since Sun 2023-09-03 10:58:57 UTC; 2h 33min ago
TriggeredBy: ● ssh.socket
Docs: man:sshd(8)
man:sshd_config(5)
Process: 728 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 739 (sshd)
Tasks: 1 (limit: 994)
Memory: 8.7M
CPU: 612ms
CGroup: /system.slice/ssh.service
└─739 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
Sep 03 13:15:16 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2557]: Invalid user admin from 139.59.78.11 port 42204
Sep 03 13:15:16 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2557]: Connection closed by invalid user admin 139.59.78.11 port 42204 [preauth]
Sep 03 13:24:43 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2664]: Accepted publickey for root from 43.224.169.197 port 22898 ssh2: RSA SHA256:5RBxNOVo/P7toxkvPlA>
Sep 03 13:24:43 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2664]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Sep 03 13:24:43 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2664]: pam_env(sshd:session): deprecated reading of user environment enabled
Sep 03 13:26:04 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2661]: fatal: Timeout before authentication for 218.92.0.108 port 40561
Sep 03 13:27:40 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2720]: Received disconnect from 61.177.172.179 port 42231:11: [preauth]
Sep 03 13:27:40 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2720]: Disconnected from authenticating user root 61.177.172.179 port 42231 [preauth]
Sep 03 13:30:02 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2727]: Invalid user from 64.62.197.5 port 13793
Sep 03 13:30:06 ubuntu-s-1vcpu-1gb-intel-nyc1-01 sshd[2727]: Connection closed by invalid user 64.62.197.5 port 13793 [preauth]
]]># node -v
v20.5.1
# npm -v
9.8.0
# live-server -v
live-server 1.2.2
I want to connect to the live-server server remotely. In my firewall, I opened port 8080 for both incoming and outgoing. When I run live-server, I’m able to connect to the server remotely.
However, my local browser doesn’t get live updates when I update files. I don’t see any errors in Chrome or from live-server; I just don’t get the updates.
How can I get those live updates?
]]>I have been struggling with this problem for two whole days already. I’ve set up a React app, a Node.js backend server, and a MySQL database on the same droplet. I can visit the React site (www.xxx.com) without problems, and my backend is also running. But I cannot connect the frontend to the backend: I always get “ERR_SSL_PROTOCOL_ERROR” (on my local Windows machine, everything works perfectly). It seems the backend server doesn’t make a secure connection. When I request ip_address:8800?api-command, I get a response, so it is working. I connect from FE to BE like this: “https://ip_address_droplet:8800”
I have to admit, I never used Nginx before, but this is my config file:
server {
    root /var/www/sitename.com/html/build;
    index index.html index.htm index.nginx-debian.html;
    server_name sitename.com;

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/sitename.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/sitename.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    server_name nodejs.sitename.com;

    location / {
        proxy_pass http://localhost:8800; # whatever port your app runs on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
server {
    if ($host = sitename.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name sitename.com;
    return 404; # managed by Certbot
}
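For completeness, the way I’m choosing the API base URL on the frontend is roughly the sketch below (hostnames are placeholders matching the config). My understanding - which I’d like confirmed - is that the frontend may need to call the nginx-proxied hostname over HTTPS rather than the droplet IP and app port directly, since the Node app itself only speaks plain HTTP:

```javascript
// Sketch: pick the API base URL per environment. In production the
// request goes through the TLS-terminating nodejs.sitename.com server
// block; https://ip_address:8800 can't complete a TLS handshake
// because the app there serves plain HTTP.
function apiBase(env) {
  return env === 'production'
    ? 'https://nodejs.sitename.com' // proxied by nginx over TLS
    : 'http://localhost:8800';      // direct plain HTTP in development
}

console.log(apiBase('production')); // https://nodejs.sitename.com
```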
Can someone guide me into the right direction? What am I doing wrong?
Thx, Peter
]]>Currently I’m working on processing files with ffmpeg. My first attempt was to deploy the ffmpeg functionality within an existing Node.js Docker container on the DigitalOcean App Platform. The file processing uses a lot of memory and CPU, which causes a crash and forces a restart of the app.
First of all, I don’t understand why this should happen on App Platform, because a Docker container should only be allowed to use its designated memory, not simply crash if it uses 100% for more than 5 seconds. We can’t upgrade the plan from Basic at $5/month to $300/month just because of that one feature.
Second, I am now trying to move this functionality into a Function using ffmpeg-static + fluent-ffmpeg. But ‘doctl serverless deploy’ throws 413 Payload Too Large. On my disk the zip file is 43 MB. I thought the limit was 48 MB? Can you assist me with that problem?
Thanks in advance.
]]>I’m trying to run a node-ffmpeg app in an Ubuntu droplet, but it keeps crashing without giving me any reason.
ffmpeg-api@1.0.0 start
NODE_ENV=production node index.mjs
[info] use ffmpeg.wasm v0.11.6
[info] load ffmpeg-core
[info] loading ffmpeg-core
[info] ffmpeg-api listening at port 3000
[info] ffmpeg-core loaded
Killed
The app is a somewhat modified version of the following tutorial (https://www.digitalocean.com/community/tutorials/how-to-build-a-media-processing-api-in-node-js-with-express-and-ffmpeg-wasm). The only difference is that it also uses the mongodb and filestack node packages. It runs perfectly well locally and on App Platform; however, it just keeps crashing in a droplet. Older commits of the app, which follow the tutorial almost exactly, are no good either.
I can confirm node.js and npm are installed and running, and my droplet is able to run a node server. While trying to troubleshoot, I also installed the ffmpeg package using sudo apt-get install -y ffmpeg libgdiplus. I don’t have much experience with droplets, so it’s probably something small I’m missing, but at this point I don’t have any idea what exactly it is.
Do you have any advice? Thank you!
]]>WebSocket connection to 'wss://localhost.com:8080/ws' failed:
I also get 404 errors for my login route, which I’m assuming are due to the same ws issue.
POST https://localhost.com/record/login 404
Can anyone suggest a resolution?
]]># This lookmeup.io page can’t be found
No webpage was found for the web address: **https://lookmeup.io/profile**
HTTP ERROR 404
]]>I saw that there are 3 ways:
Basically, my question is: between these methods, which consumes the fewest resources, is the cheapest, and scales most easily?
]]>I’m currently running an ##### Ubuntu 18.04 stack with Node.js [latest].
I want to be able to run the serve build command as suggested in https://create-react-app.dev/docs/deployment/ for deploying a create-react-app. However, I am unsure whether I’m able to globally install the npm package serve and have it serve the build directory for my app.
At the moment I just have the run command running npm start, which starts the local development create-react-app server.
Any ideas? Thanks
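What I’m hoping the production run command can look like is roughly this package.json (a sketch following the create-react-app deployment docs, with serve as a regular dependency rather than a global install; the version is only an example):

```json
{
  "scripts": {
    "start": "serve -s build"
  },
  "dependencies": {
    "serve": "^14.2.0"
  }
}
```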
]]>multipart/form-data
via a POST request from the axios npm package; the payload looks something like:
Form Data
1. file: (binary)
2. file: (binary)
3. file: (binary)
4. file: (binary)
5. file: (binary)
6. file: (binary)
7. file: (binary)
8. file: (binary)
9. file: (binary)
10. file: (binary)
11. file: (binary)
12. file: (binary)
13. file: (binary)
14. file: (binary)
15. file: (binary)
16. file: (binary)
17. file: (binary)
18. file: (binary)
19. file: (binary)
I have a DO Droplet (2 GB Memory / 50 GB Disk / LON1 - Ubuntu 22.10 x64). It currently runs as a proxy to another server: Ubuntu running a very simple Node.js (Express) server:
// include dependencies
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');
// proxy middleware options
/** @type {import('http-proxy-middleware/dist/types').Options} */
const options = {
  target: 'http://example.com', // target host
  changeOrigin: true, // needed for virtual hosted sites
  ws: true, // proxy websockets
  pathRewrite: {
    '^/api/log-handler': '/' // rewrite path
  },
  router: {
    // when request.headers.host == 'dev.localhost:3000',
    // override target 'http://www.example.org' to 'http://localhost:8000'
    'dev.localhost:3000': 'http://localhost:8000',
  },
};
// create the proxy (without context)
const exampleProxy = createProxyMiddleware(options);
// mount `exampleProxy` in web server
const app = express();
app.use(express.json({ limit: '500mb' }));
app.use(express.urlencoded({ extended: false, limit: '500mb', parameterLimit: 500000 }));
app.use('/api', exampleProxy);
app.listen(3000);
You can see above, I have tried to increase the size of requests within express…
Also within my nginx config /etc/nginx/nginx.conf
I have added ::
http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    client_max_body_size 500M;
    client_body_buffer_size 128k;
    keepalive_timeout 65;
    tcp_nodelay on;
I’ve added the client_max_body_size directive in the /etc/nginx/sites-available/default and /etc/nginx/sites-available/example.com files too, just to make sure.
However, on upload I’m still getting this error in the browser window…
Status: 413 Request Entity Too Large
Version: HTTP/1.1
Transferred: 268 B (67 B size)
Referrer Policy: strict-origin-when-cross-origin
Request Priority: Highest
In the response headers of the network tab I see this…
Connection: keep-alive
Content-Length: 67
Content-Type: text/html
Date: Tue, 30 May 2023 17:10:33 GMT
Server: nginx/1.22.0 (Ubuntu)
X-Powered-By: Express
Since this server is a proxy to another server, I assume the error lies with the Express server: the response headers tell me the responding server is nginx etc., and the server this DO droplet forwards the request to isn’t an nginx server.
My question: do DO droplets have a limit on HTTP uploads that I might be hitting? Is the error coming from the nginx/Node.js server, or from the server the request gets forwarded to? (From what I can see there’s no request reaching that server, so the request doesn’t get there, but it could be getting eaten up before it reaches my console.log outputs.)
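For what it’s worth, this is roughly the check I’ve been applying to the response headers above to guess which hop generated the 413. A sketch, and my own reasoning rather than a definitive rule: an error page generated by nginx itself wouldn’t normally carry Express’s X-Powered-By header.

```javascript
// Guess which server produced an error response from its headers.
// Header names are lowercased, as fetch/axios normally expose them.
function blame413(headers) {
  if (headers['x-powered-by'] === 'Express') return 'express';
  if ((headers.server || '').startsWith('nginx')) return 'nginx';
  return 'unknown';
}

// Matches the headers shown above: Express answered, so the request
// made it through nginx to the Node process.
console.log(blame413({ server: 'nginx/1.22.0 (Ubuntu)', 'x-powered-by': 'Express' })); // "express"
```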
Thank you, any advice is super helpful too.
]]>“We encountered an error when trying to load your application and your page could not be served. Check the logs for your application in the App Platform dashboard.”
However, I cannot see anything in the logs that indicates problems building and deploying the app.
The app runs fine on my local development environment.
]]>I won’t have a server browser; it will be a quick-join button, and the matchmaking server will spin up an instance of the game session and shut it down on conclusion.
Initially, I was intending to have multiple droplets on standby, but this is a very manual approach. I understand I should be using the PaaS functionality instead.
Is anyone willing to share some insight into how this could/should be implemented?
]]>I’ve been using App Platform for 3 months (I came from Heroku) and I have two strange behaviors:
<!DOCTYPE html>
<html>
<head>
<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">
<meta name=\"robots\" content=\"noindex\">
<style>body,html{height:100%;margin:0}body{display:flex;align-items:center;justify-content:center;flex-direction:column;-webkit-font-smoothing:antialiased;text-rendering:optimizeLegibility}p{text-align:center;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;color:#000;font-size:14px;margin-top:-50px}p.code{font-size:24px;font-weight:500;border-bottom:1px solid #e0e1e2;padding:0 20px 15px}p.text{margin:0}a,a:visited{color:#aaa}</style>
</head>
<body>
<p class=\"code\">
Error
</p>
<p class=\"text\">
We encountered an error when trying to load your application and your page could not be served. Check the logs for your application in the App Platform dashboard.
</p>
<div style=\"display:none;\">
<h1>
upstream_reset_before_response_started{connection_termination} (503 UC) </h1>
<p data-translate=\"connection_timed_out\">App Platform failed to forward this request to the application.</p>
</div>
</body>
</html>
It’s critical because it breaks some requests. I have no logs for it either.
Thank you for your help.
]]>
I’m using the @protobuf-ts/runtime npm module as a runtime dependency in my project. This runtime dependency works locally on my machine, but when I deploy and then invoke my function I get greeted with Error: Cannot find module '@protobuf-ts/runtime', with the code MODULE_NOT_FOUND. I find this surprising, as I installed this dependency the same way I installed another dependency, @aws-sdk/client-s3, which works as expected at runtime.
In an attempt to isolate the issue I have:
1. Ran doctl serverless init --language ts test-do-project-0.
2. Changed nodejs:default to nodejs:18 in the project.yml.
3. Ran npm install @protobuf-ts/runtime in the sample/hello function directory.
4. Imported @protobuf-ts/runtime and used it in the main function.
When I deploy and invoke the function it errors with:
2023-05-02T21:57:49.277554254Z stdout: ReferenceError: exports is not defined in ES module scope
2023-05-02T21:57:49.277556929Z stdout: at file:///tmp/index.mjs:2:23
2023-05-02T21:57:49.277577833Z stdout: at ModuleJob.run (node:internal/modules/esm/module_job:194:25)
I have tried fiddling with the tsconfig.json module settings to resolve the issue, to no avail, only succeeding in reproducing my original error (Cannot find module '@protobuf-ts/runtime'). The function’s package.json does not include "type": "module".
If I instead npm install @aws-sdk/client-s3 and import and use it in hello.ts instead of @protobuf-ts/runtime, then everything works as expected. This makes me wonder whether it is a problem specific to the @protobuf-ts/runtime npm module; but since it works locally on my machine, it must be something to do with how this dependency is handled in the DO Functions Node runtime. Perhaps there is a difference in npm version?
I have read through both the Build Process and Functions Node.js JavaScript Runtime docs but can’t figure out what is going awry.
For reference and reproduction I have pushed the issue isolating Functions project to Github: https://github.com/willjvsmith/test-do-project-0
Thanks!
]]>Any idea what’s going on? I haven’t found anything on the internet. Thanks for your insights in advance.
]]>I suspect my droplet doesn’t have enough memory for this, since React has so many dependencies, but I want to make sure before I upgrade.
Node version: 18.7.0, NPM version: 8.18.0
]]>I want to develop a small application and thought of using DO serverless functions.
The function is very small and will be activated by an HTTP request about once a month, so I think creating a full App for it would be overkill…
What I need to achieve:
Will DO serverless functions be up to the task, or do I still need to launch a full App + Express for this?
Can I require NPM packages? Can I connect to the MySQL DB? Can I connect to the MySQL DB using DO’s native connector, like with Apps, or do I still need to use some NPM package like sequelize.js?
Thanks!
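To make the question concrete, the shape of the function I have in mind is roughly this sketch. The exported main(args) signature follows the Functions convention as I understand it, and the mysql2 line is commented out because the package choice is exactly what I’m asking about:

```javascript
// Sketch of a serverless-function handler reading DB settings from
// environment variables bound to the function.
function main(args) {
  const config = {
    host: process.env.DB_HOST,
    port: Number(process.env.DB_PORT || 25060),
    user: process.env.DB_USER,
    // password / database would also come from env vars
  };
  // const conn = await require('mysql2/promise').createConnection(config);
  return { body: { ok: true, host: config.host || null } };
}

exports.main = main;
console.log(main({}).body.ok); // true
```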
]]>You can read more about that here:
https://docs.digitalocean.com/tutorials/droplet-cloudinit/
Anyway, I just wanted to create an example of how to automate an Nginx reverse proxy setup for Node.js with cloud-init.
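A minimal sketch of what I mean (cloud-init user data; package names, the port, and paths are illustrative rather than a tested recipe):

```yaml
#cloud-config
package_update: true
packages:
  - nginx
  - nodejs
write_files:
  - path: /etc/nginx/sites-available/node-app
    content: |
      server {
          listen 80;
          location / {
              proxy_pass http://127.0.0.1:3000;
              proxy_set_header Host $host;
          }
      }
runcmd:
  - ln -s /etc/nginx/sites-available/node-app /etc/nginx/sites-enabled/node-app
  - systemctl restart nginx
```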
]]>I keep receiving the below error when I try to create a new post. I can see the photo uploads successfully to my “public/images”, but I can’t successfully create a new post.
Could it be an issue with my droplet? I have already increased the file size limit in nginx, increased the timeout in nginx, and tried different browsers. I did a console.log of req.file and it exists, and a console.log of post shows that an object is created (referring to the PostService line in the API). I’m not sure where the error could be.
r.js:247 POST http://DO droplet/api/blog/post/newPost 502 (Bad Gateway) dispat
Post Model
const mongoose = require('mongoose') // required for Schema below
const moment = require('moment')     // used for the timestamp default
const Schema = mongoose.Schema

let postSchema = new Schema({
  user: { type: Schema.Types.ObjectId, ref: 'User' },
  title: { type: String, minLength: 5, required: true },
  category: { type: String, required: true },
  photo: { type: String },
  post: { type: String, minLength: 5, required: true },
  timestamp: { type: String, default: () => moment().format("MMMM Do YYYY, h:mm:ss a") }
})

module.exports = mongoose.model('Post', postSchema)
PostController
const Post = require('../models/postSchema')
const multer = require('multer');
const storage = multer.diskStorage({
  destination: function (req, file, cb) {
    cb(null, 'public/images')
  },
  filename: function (req, file, cb) {
    cb(null, Date.now() + "-" + file.originalname)
  }
})
const imageFilter = function (req, file, cb) {
  if (file.originalname.match(/\.(jpg|jpeg|png|gif)$/)) {
    cb(null, true);
  } else {
    cb(new Error("OnlyImageFilesAllowed"), false);
  }
}
class PostService {
  // create
  static create(obj) {
    let post = new Post(obj)
    return post.save()
  }
}
API
/* Multer for Photos */
const multer = require('multer');
const upload = multer({
  storage: postsController.storage,
  fileFilter: postsController.imageFilter
});
// Create Posts
router.post('/post/newPost', [upload.single('photo'),
  check("title", "A title is required and must be at least 5 characters long")
    .not()
    .bail()
    .isEmpty()
    .bail()
    .isLength({ min: 5 })
    .bail()
    .trim(),
  check("category", "A category is required")
    .not()
    .bail()
    .isEmpty()
    .bail(),
  check("post", "A post is required and must be at least 5 characters long")
    .not()
    .bail()
    .isEmpty()
    .bail()
    .isLength({ min: 5 })
    .bail()
    .trim(),
], (req, res, next) => {
  const errors = validationResult(req)
  if (!errors.isEmpty()) {
    // If there are errors, delete any uploaded files
    if (req.file) {
      fs.unlink(req.file.path, (err) => {
        if (err) {
          console.error(err)
        }
      })
    }
    return res.status(400).json({ errors: errors.array() })
  } else {
    const post = {
      user: req.body.user,
      title: req.body.title,
      category: req.body.category,
      post: req.body.post
    };
    if (req.file) {
      post.photo = req.file.filename
    }
    PostService.create(post)
      .then((post) => {
        res.status(201);
        res.json(post);
      }).catch((err) => {
        // fs.unlink needs a callback (or use fs.unlinkSync)
        if (req.file) {
          fs.unlink(req.file.path, (err) => { if (err) console.error(err) })
        }
        console.log(err)
      });
  }
});
FRONTEND
const CreatNewPost = () => {
  const { user } = useSelector((state) => state.auth)
  const { register, handleSubmit, reset, formState: { errors } } = useForm({
    defaultValues: {
      user: user ? user.currentUser.id : null,
      title: "",
      category: "",
      post: "",
      photo: ""
    }
  });
  const [errorsServer, setErrorsServer] = useState("")

  const submitNewPost = (data) => {
    const formData = new FormData()
    formData.append('user', data.user);
    formData.append('title', data.title);
    formData.append('category', data.category);
    formData.append('post', data.post);
    formData.append('photo', data.photo[0]);
    (async () => {
      try {
        await axios.post(`${process.env.REACT_APP_URL}/api/blog/post/newPost`, formData)
        toast.success("View your profile for your new post!")
        reset()
      } catch (error) {
        if (error.response) setErrorsServer(error.response.data.errors);
        toast.error("Unable to create new post")
      }
    })();
    if (errorsServer) setErrorsServer('')
  }
  return (
    <Box
      display="flex"
      direction="column"
      alignItems="center"
      justifyContent="center">
      <Grid
        container
        width={700}
        direction="column"
        alignItems="center"
        justifyContent="center"
        className="formPostContainer"
        sx={{
          p: 5,
          boxShadow: 2,
          borderRadius: 2,
          '& button': { my: 3 },
        }}
      >
        <h1 className="createTitle">New Post</h1>
        <form className="postForm" onSubmit={handleSubmit(submitNewPost)}>
          {errorsServer && errorsServer.map((error) => (
            <div className="errorMsg" key={error.param}>
              <div>{error.msg}</div>
            </div>
          ))}
          {errors.title && <Alert severity="error"><AlertTitle>Error</AlertTitle> <span>Titles must be at least 5 characters long</span></Alert>}
          {errors.category && <Alert severity="error"><AlertTitle>Error</AlertTitle><span>A category must be selected.</span></Alert>}
          {errors.post && <Alert severity="error"><AlertTitle>Error</AlertTitle><span>Posts must be at least 5 characters long</span></Alert>}
          <TextField
            id="title"
            type="text"
            name="title"
            label="Title"
            placeholder="Title"
            fullWidth
            margin="normal"
            {...register("title", { required: true, minLength: 5 })}
          />
          <FormControl fullWidth margin="normal">
            <InputLabel htmlFor="category">Select...</InputLabel>
            <Select
              name="category"
              id="category"
              variant="outlined"
              defaultValue=""
              {...register("category", { required: true })}
            >
              <MenuItem value="physical">Physical</MenuItem>
              <MenuItem value="creative">Creative</MenuItem>
              <MenuItem value="mental">Mental</MenuItem>
              <MenuItem value="food">Food</MenuItem>
              <MenuItem value="collecting">Collecting</MenuItem>
              <MenuItem value="games+puzzles">Games+Puzzles</MenuItem>
            </Select>
          </FormControl>
          <label htmlFor="photo">Upload Photo:
            <input
              type="file"
              name="photo"
              id="photo"
              className="photo"
              {...register("photo")}
            />
          </label>
          <TextField
            id="post"
            name="post"
            label="Blog Post"
            placeholder="Write New Post"
            fullWidth
            multiline
            margin="normal"
            rows={20}
            {...register("post", { required: true, minLength: 5 })}
          />
          <Button className="submitFormBtn" type="submit" variant="contained" color="success" endIcon={<SendIcon />} fullWidth>Submit</Button>
        </form>
      </Grid>
    </Box>
  );
}
]]>I have been having issues with my MERN project lately. One is that I keep getting 404 errors, and the other is that I now see a 502 error when I visit my DO IP address.
Today I was making changes to my signup and login code. I noticed that I would get 404 errors when trying to test it out and I would also get the same errors in POSTMAN.
I decided to visit my Explore page and got a GET request 404 error, which was strange because this wasn’t happening yesterday.
Then in Postman I did a GET request to /api/blog/posts and to api/users/users, and I didn’t get a list of posts or users. Instead I saw the index.html from React.
I went to my DO IP address and saw the same 404 error on my Explore page. I then got rid of all of my changes for the day, and that didn’t help.
After a couple of minutes I went back to my IP address and now see a 502 Gateway Error. I went back to Postman to do GET requests to /api/blog/posts and to api/users/users, and now I see the 502 Gateway Error there too.
I’m not sure what the issue is: NGINX, my code, or something else. I can’t do anything with my project now.
Any help would be appreciated.
NGINX
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/*PROJECT NAME*/client/build/index.html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name *DO IP ADDRESS*;

    location ^~ /assets/ {
        gzip_static on;
        expires 12h;
        add_header Cache-Control public;
    }

    location / {
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:3000;
        client_max_body_size 50M;
    }

    location = /favicon.ico {
        log_not_found off;
    }
}
server.js
require('dotenv').config()
const path = require("path")
const express = require('express')
const mongoose = require('mongoose')
const cookieParser = require('cookie-parser')
const app = express()
const blogAPI = require('./api/blogApi')
const auth = require('./api/auth')
const users = require('./api/users')

/* Connect to Database */
mongoose.connect(process.env.DBURI, {
  useNewUrlParser: true,
  useUnifiedTopology: true
}).then(() => console.log('MongoDB connected...'))
  .catch(err => console.log(err))

app.use(cookieParser())

/* POST form handling */
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

/* Routing */
app.use('/api/blog', blogAPI)
app.use('/api/auth', auth)
app.use('/api/users', users)
app.use('/public', express.static(path.join(__dirname, 'public')))

// Display React in production
app.use(express.static(path.join(__dirname, '/../client', 'build')))
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, '/../client', 'build', 'index.html'))
});

app.listen(process.env.PORT, () => console.log("Server started!"))

module.exports = app;
PostController
class PostService {
  // list
  static list() {
    return Post.find({})
      .then((posts) => {
        return posts
      })
  }
}
API
/* CORS */
const corsOptions = {
  origin: `${process.env.FRONTEND}`,
  methods: "GET,HEAD,PUT,PATCH,POST,DELETE",
  credentials: true,
  preflightContinue: false,
  optionsSuccessStatus: 204
}
router.use(cors(corsOptions))

/* Start of Routing */
/* Code associated with blog posts */
// list
router.get('/posts', (req, res, next) => {
  PostService.list()
    .then((posts) => {
      res.status(200)
      res.json(posts)
    })
})
interceptor.js
const axiosPrivate = axios.create({
  baseURL: `${process.env.REACT_APP_URL}/api`
});
Explore Page
useEffect(() => {
  const fetchPosts = async () => {
    try {
      let response = await axiosPrivate.get('/blog/posts')
      setPostsLoaded(response.data)
      setLatestPosts(response.data.slice(-5))
      setShowLatestPosts(true)
    } catch (error) {
      console.log(error);
    }
  }
  fetchPosts()
}, [])
]]>Thanks for the very good tutorial. I am trying to return the data instead of saving it to a file, so I can call this script from another js file and use the data there.
I tried adding return scraperController(browserInstance) in index.js and using a return statement in the try section of pageController.js. However, this doesn’t work.
Can you give me any hints?
My sites-available default file:
server {
listen 80;
server_name domain1.com;
root /var/www/domain1/html/domain1.com/;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80;
server_name domain2.co.in;
root /var/www/domain2/html/domain2.co.in/;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
When I open either domain, Nginx shows a 502 Bad Gateway error.
npm run build works fine for both domains inside /var/www/domain1 and /var/www/domain2, and I think the issue is with the sites-available default configuration.
Here is my configuration:
server {
listen 80;
server_name domain1.com;
location / {
proxy_pass http://127.0.0.1:3333;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80;
server_name domain2.co.in;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
I’m running domain1 on port 3333 and domain2 on 3000 so that a port conflict does not occur. Do I need to change anything in the sites-available file?
The fetch call from the React UI is set up like this…
console.log('aiReqObj...', aiReqObj);
let aiResponse = await fetch('https://whateverappname.ondigitalocean.app/preview', {
method: "POST",
body: JSON.stringify(aiReqObj)
})
aiReqObj logs out just fine. On my Express server, the route is handled something like this…
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.post('/preview', async (req, res) => {
console.log(req.body);
})
When the request comes from the above fetch call in my react UI, this console.log(req.body) returns an empty object. What may I have overlooked in setting things up? Is this some sort of issue with a CORS header? Any ideas are greatly appreciated.
My question is: what’s the reason behind this issue?
[2023-03-11 22:56:33] > pond-service@0.0.1 start
[2023-03-11 22:56:33] > node dist/app.js
[2023-03-11 22:56:33]
[2023-03-11 22:56:33] node:internal/modules/cjs/loader:1024
[2023-03-11 22:56:33] throw err;
[2023-03-11 22:56:33] ^
[2023-03-11 22:56:33]
[2023-03-11 22:56:33] Error: Cannot find module '/workspace/dist/app.js'
[2023-03-11 22:56:33] at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1021:15)
[2023-03-11 22:56:33] at Function.Module._load (node:internal/modules/cjs/loader:866:27)
[2023-03-11 22:56:33] at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
[2023-03-11 22:56:33] at node:internal/main/run_main_module:22:47 {
[2023-03-11 22:56:33] code: 'MODULE_NOT_FOUND',
[2023-03-11 22:56:33] requireStack: []
[2023-03-11 22:56:33] }
[]
Does anyone have a clue on this issue? What other information can I provide to help debug this?
[2023-03-09 16:39:12] ╭──────────── git repo clone ───────────╼
[2023-03-09 16:39:12] │ › fetching app source code
[2023-03-09 16:39:12] │ => Selecting branch "deploy"
[2023-03-09 16:39:13] │ => Checking out commit "c670dcbcdada6094f6bd002a0e5c7c3f1ab7139a"
[2023-03-09 16:39:13] │
[2023-03-09 16:39:13] │ ✔ cloned repo to /.app_platform_workspace
[2023-03-09 16:39:13] ╰────────────────────────────────────────╼
[2023-03-09 16:39:13]
[2023-03-09 16:39:13] › applying source directory functions
[2023-03-09 16:39:13] ✔ using workspace root /.app_platform_workspace/functions
[2023-03-09 16:39:13]
[2023-03-09 16:39:16] › validating project.yml configuration
[2023-03-09 16:39:17] ✔ project.yml is valid
[2023-03-09 16:39:17] › preparing build environment
[2023-03-09 16:39:18]
[2023-03-09 16:39:18] ╭──────────── project build ───────────╼
[2023-03-09 16:39:19] │ Started running npm install && npm run build in /.app_platform_workspace/functions/lib
[2023-03-09 16:39:19] │ /bin/bash: line 1: npm: command not found
[2023-03-09 16:39:19] │
[2023-03-09 16:39:19] │ › Error: 'npm install' exited with code 127
[2023-03-09 16:39:19] │
[2023-03-09 16:39:19] │ command exited with code 1
[2023-03-09 16:39:19] │
[2023-03-09 16:39:20] │ ✘ build failed
The remote build process seems unable to find its own npm
binary. Any advice will be gratefully received.
tsc
script hangs forever.
I ran the build with --verbose
flag and here’s the output. Does anyone know why the tsc
command hangs forever?
root@jcm-v2:/var/www/html/jcm-api-v2# tsc --verbose
error TS5093: Compiler option '--verbose' may only be used with '--build'.
root@jcm-v2:/var/www/html/jcm-api-v2# npm run build --verbose
npm verb cli /usr/bin/node /usr/bin/npm
npm info using npm@8.18.0
npm info using node@v18.7.0
npm timing npm:load:whichnode Completed in 0ms
npm timing config:load:defaults Completed in 5ms
npm timing config:load:file:/usr/share/nodejs/npm/npmrc Completed in 7ms
npm timing config:load:builtin Completed in 7ms
npm timing config:load:cli Completed in 5ms
npm timing config:load:env Completed in 1ms
npm timing config:load:file:/var/www/html/jcm-api-v2/.npmrc Completed in 1ms
npm timing config:load:project Completed in 7ms
npm timing config:load:file:/root/.npmrc Completed in 0ms
npm timing config:load:user Completed in 2ms
npm timing config:load:file:/etc/npmrc Completed in 0ms
npm timing config:load:global Completed in 0ms
npm timing config:load:validate Completed in 1ms
npm timing config:load:credentials Completed in 3ms
npm timing config:load:setEnvs Completed in 2ms
npm timing config:load Completed in 35ms
npm timing npm:load:configload Completed in 36ms
npm timing npm:load:mkdirpcache Completed in 2ms
npm timing npm:load:mkdirplogs Completed in 2ms
npm verb title npm run build
npm verb argv "run" "build" "--loglevel" "verbose"
npm timing npm:load:setTitle Completed in 4ms
npm timing config:load:flatten Completed in 7ms
npm timing npm:load:display Completed in 30ms
npm verb logfile logs-max:10 dir:/root/.npm/_logs
npm verb logfile /root/.npm/_logs/2023-03-07T03_47_00_121Z-debug-0.log
npm timing npm:load:logFile Completed in 20ms
npm timing npm:load:timers Completed in 0ms
npm timing npm:load:configScope Completed in 0ms
npm timing npm:load Completed in 101ms
(⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂⠂) ⠼ : timing npm:load Completed in 101ms
> jcm-api@1.0.0 build
> tsc
npm timing command:run Completed in 2000263ms
npm verb exit 0
npm timing npm Completed in 2000414ms
npm info ok
Thanks in advance!
However, I think the CA path and env var may be different for a Node.js app vs. Laravel, like in that post. Has anyone had any luck connecting to PlanetScale DBs via mysql2
in a Node.js app before?
Any help is greatly appreciated 🙏
Thanks
node index.js
but when I try to run pm2 start index.js
I end up with a failed deployment. I haven’t seen anywhere saying otherwise, but is it possible to use PM2 on the App Platform?
Error message: Component Issues app_name failed to deploy
Node v16 is set to reach end of life in September, so it’s pretty undesirable to be pushing new v16 apps to prod right now. Also, you probably want at least several months of v18 support before then so that everyone running a node app on App Platform has time to upgrade.
There are also libraries coming out that only support v18 and above.
Been banging my head against the wall for a few days trying to get this issue sorted, and I can’t quite figure it out.
Technology: NodeJS
Task: Create a simple backend server that will accept a form with file uploads, and that file will be upstreamed to DO Spaces bucket.
Issue: When testing API route with Postman or the NextJS frontend, request comes back as successful, file is uploaded to bucket, except that the file has a size of 0 bytes. File name is successfully passed, however.
import express from 'express';
import dotenv from 'dotenv';
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import formidable from 'formidable';
import fs from 'fs';
import cors from 'cors';
dotenv.config();
const app = express();
app.use(express.json());
app.use(cors())
const s3Client = new S3Client({
endpoint: process.env.DO_SPACES_URL || "https://nyc3.digitaloceanspaces.com",
forcePathStyle: false,
region: "nyc3",
credentials: {
accessKeyId: process.env.DO_SPACES_ID,
secretAccessKey: process.env.DO_SPACES_SECRET
}
});
const uploadObject = async ({ Key, Body, ACL, Metadata }) => {
const params = { Bucket: process.env.DO_SPACES_BUCKET, Key, Body, ACL, Metadata };
try {
const data = await s3Client.send(new PutObjectCommand(params));
console.log(`Successfully uploaded object: ${params.Bucket}/${params.Key}`);
return data;
} catch (err) {
console.log("Error", err);
}
};
app.post('/submit-form', (req, res) => {
const form = formidable()
form.parse(req, async (err, fields, files) => {
if (!files) {
res.status(400).json({ error: 'No file uploaded' });
return
}
try {
const file = files.file;
return uploadObject({
Key: file.originalFilename,
Body: fs.originalFilename,
ACL: 'public-read',
Metadata: { },
}),
res.status(201).send('File uploaded successfully');
}
catch (err) {
console.log(err)
res.status(500).json({ error: 'Failed to upload file' });
}
})
});
app.listen(3001, () => {
console.log('Server listening on port 3001.');
});
I’m really at a loss, and if anyone could shed some light on the issue I’d be eternally grateful.
Thanks in advance!
However, I have noticed that we are getting some intermittent downtime. At the same time, I have noticed some strange traffic incoming from localhost. Here’s a snippet of the logs to give you an idea:
[Thu, 09 Feb 2023 17:42:32 GMT] 127.0.0.1 POST //admin/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.459 ms - 23
[Thu, 09 Feb 2023 17:42:32 GMT] 127.0.0.1 POST //laravel/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.298 ms - 23
[Thu, 09 Feb 2023 17:42:33 GMT] 127.0.0.1 POST //lib/phpunit/Util/PHP/eval-stdin.php 404 0.328 ms - 23
[Thu, 09 Feb 2023 17:42:33 GMT] 127.0.0.1 POST //new/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.311 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //protected/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.363 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //sites/all/libraries/mailchimp/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 3.516 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //wp-content/plugins/cloudflare/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.410 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //wp-content/plugins/dzs-videogallery/class_parts/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.310 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //wp-content/plugins/jekyll-exporter/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.294 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //wp-content/plugins/mm-plugin/inc/vendors/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.307 ms - 23
[Thu, 09 Feb 2023 17:42:34 GMT] 127.0.0.1 POST //www/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 0.335 ms - 23
[Thu, 09 Feb 2023 17:56:53 GMT] 127.0.0.1 GET / 404 3.443 ms - 23
[Thu, 09 Feb 2023 18:40:21 GMT] 127.0.0.1 GET /_ignition/execute-solution 404 0.325 ms - 23
[Thu, 09 Feb 2023 19:20:38 GMT] 127.0.0.1 GET /apis/apps/v1/namespaces/kube-system/daemonsets 404 0.401 ms - 23
[Thu, 09 Feb 2023 19:55:42 GMT] 127.0.0.1 GET /robots.txt 404 0.347 ms - 23
[Thu, 09 Feb 2023 20:20:40 GMT] 127.0.0.1 GET /metrics 404 0.495 ms - 23
[Thu, 09 Feb 2023 20:20:40 GMT] 127.0.0.1 GET /v2/ 404 0.499 ms - 23
[Thu, 09 Feb 2023 20:26:01 GMT] 127.0.0.1 GET / 404 0.457 ms - 23
[Thu, 09 Feb 2023 20:26:01 GMT] 127.0.0.1 GET /metrics 404 0.388 ms - 23
[Thu, 09 Feb 2023 21:26:57 GMT] 127.0.0.1 GET /favicon.ico 404 0.337 ms - 23
[Thu, 09 Feb 2023 21:55:57 GMT] 127.0.0.1 GET /bootstrap-2.min.js 404 0.338 ms - 23
[Thu, 09 Feb 2023 21:55:59 GMT] 127.0.0.1 GET /api/x 404 0.420 ms - 23
[Thu, 09 Feb 2023 22:16:45 GMT] 127.0.0.1 GET /.env 404 0.379 ms - 23
[Thu, 09 Feb 2023 22:16:45 GMT] 127.0.0.1 POST / 404 1.353 ms - 23
[Fri, 10 Feb 2023 03:57:01 GMT] 127.0.0.1 GET /_asterisk/ 404 0.435 ms - 23
[Fri, 10 Feb 2023 04:44:29 GMT] 127.0.0.1 GET /ab2g 404 0.370 ms - 23
[Fri, 10 Feb 2023 04:44:30 GMT] 127.0.0.1 GET /ab2h 404 0.506 ms - 23
[Fri, 10 Feb 2023 05:10:15 GMT] 127.0.0.1 GET /dns-query?name=dnsscan.shadowserver.org&type=A 404 0.563 ms - 23
[Fri, 10 Feb 2023 05:12:55 GMT] 127.0.0.1 GET /.git/config 404 0.312 ms - 23
[Fri, 10 Feb 2023 05:21:44 GMT] 127.0.0.1 GET / 404 0.369 ms - 23
[Fri, 10 Feb 2023 08:46:09 GMT] 127.0.0.1 GET /owa/auth/logon.aspx?url=https%3a%2f%2f1%2fecp%2f 404 0.338 ms - 23
[Fri, 10 Feb 2023 09:08:23 GMT] 127.0.0.1 GET /favicon.ico 404 0.637 ms - 23
[Fri, 10 Feb 2023 09:08:24 GMT] 127.0.0.1 GET /robots.txt 404 0.319 ms - 23
[Fri, 10 Feb 2023 09:08:26 GMT] 127.0.0.1 GET /.well-known/security.txt 404 0.437 ms - 23
[Fri, 10 Feb 2023 09:53:51 GMT] 127.0.0.1 GET /showLogin.cc 404 0.298 ms - 23
[Fri, 10 Feb 2023 10:18:05 GMT] 127.0.0.1 GET /autodiscover/autodiscover.json?@zdi/Powershell 404 0.270 ms - 23
[Fri, 10 Feb 2023 15:48:41 GMT] 127.0.0.1 GET /?XDEBUG_SESSION_START=phpstorm 404 0.302 ms - 23
[Fri, 10 Feb 2023 16:26:28 GMT] 127.0.0.1 GET /actuator/health 404 0.361 ms - 23
To be clear, the IP addresses of external traffic are showing up as expected (i.e., most traffic does not come from 127.0.0.1), and the app is a REST API, so I’m not sure where these requests are coming from.
As you can see, the API is responding with a 404, so I’m not really sure how much impact it’s having on the app, or whether it has anything to do with the intermittent downtime we’re experiencing.
So I guess my questions are: why would my app (which is running on a port on localhost) be receiving incoming traffic from localhost? Should I blocklist 127.0.0.1 from my app? Where would these requests be coming from and why would they know to send requests to a specific port on localhost?
]]>When you run a Node.js program on a system with multiple CPUs, it creates a process that uses only a single CPU to execute by default. Since Node.js uses a single thread to execute your JavaScript code, all the requests to the application have to be handled by that thread running on a single CPU. If the application has CPU-intensive tasks, the operating system has to schedule them to share a single CPU until completion. That can result in a single process getting overwhelmed if it receives too many requests, which can slow down the performance. If the process crashes, users won’t be able to access your application.
As a solution, Node.js introduced the cluster
module, which creates multiple copies of the same application on the same machine and has them running at the same time. It also comes with a load balancer that evenly distributes the load among the processes using the round-robin algorithm. If a single instance crashes, users can be served by the remaining processes that are still running. The application’s performance significantly improves because the load is shared among multiple processes evenly, preventing a single instance from being overwhelmed.
In this tutorial, you will scale a Node.js application using the cluster
module on a machine with four or more CPUs. You’ll create an application that does not use clustering, then modify the app to use clustering. You’ll also use the pm2
module to scale the application across multiple CPUs. You’ll use a load testing tool to compare the performance of the app that uses clustering and the one that doesn’t, as well as assess the pm2
module.
To follow this tutorial, you will need the following:
In this step, you will create the directory for the project and download dependencies for the application you will build later in this tutorial. In Step 2, you’ll build the application using Express. You’ll then scale it in Step 3 to multiple CPUs with the built-in cluster
module, which you’ll measure with the loadtest
package in Step 4. From there, you’ll scale it with the pm2
package and measure it again in Step 5.
To get started, create a directory. You can call it the cluster_demo
or any directory name you prefer:
- mkdir cluster_demo
Next, move into the directory:
- cd cluster_demo
Then, initialize the project, which will also create a package.json
file:
- npm init -y
The -y
option tells NPM to accept all the default options.
When the command runs, it will yield output matching the following:
Wrote to /home/sammy/cluster_demo/package.json:
{
"name": "cluster_demo",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Note these properties that are aligned with your specific project:
- name: the name of the npm package.
- version: your package’s version number.
- main: the entry point of your project.
To learn more about other properties, you can review the package.json
section of NPM’s documentation.
Next, open the package.json
file with your preferred editor (this tutorial will use nano
):
- nano package.json
In your package.json
file, add the highlighted text to enable support for ES modules when importing packages:
{
...
"author": "",
"license": "ISC",
"type": "module"
}
Save and close the file with CTRL+X.
Next, you will download the following packages:
- express: a framework for building web applications in Node.js.
- loadtest: a load testing tool, useful for generating traffic to an app to measure its performance.
- pm2: a tool that automatically scales an app to multiple CPUs.
Run the following command to download the Express package:
- npm install express
Next, run the command to download the loadtest
and pm2
packages globally:
- npm install -g loadtest pm2
Now that you’ve installed the necessary dependencies, you will create an application that does not use clustering.
In this step, you will create a sample program containing a single route that will start a CPU-intensive task upon each user’s visit. The program will not use the cluster
module so that you can assess the performance implications of running a single instance of an app on one CPU. You’ll compare this approach against the performance of the cluster
module later in the tutorial.
Using nano
or your favorite text editor, create the index.js
file:
- nano index.js
In your index.js
file, add the following lines to import and instantiate Express:
import express from "express";
const port = 3000;
const app = express();
console.log(`worker pid=${process.pid}`);
In the first line, you import the express
package. In the second line, you set the port
variable to port 3000
, which the application’s server will listen on. Next, you set the app
variable to an instance of Express. After that, you log the process ID of the application’s process in the console using the built-in process
module.
Next, add these lines to define the route /heavy
, which will contain a CPU-bound loop:
...
app.get("/heavy", (req, res) => {
let total = 0;
for (let i = 0; i < 5_000_000; i++) {
total++;
}
res.send(`The result of the CPU intensive task is ${total}\n`);
});
In the /heavy
route, you define a loop that increments the total
variable 5 million times. You then send a response containing the value in the total
variable using the res.send()
method. While the example of the CPU-bound task is arbitrary, it demonstrates CPU-bound tasks without adding complexity. You can also use other names for the route, but this tutorial uses /heavy
to indicate a heavy performance task.
Next, call the listen()
method of the Express module to have the server listening on port 3000
stored in the port
variable:
...
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
The complete file will match the following:
import express from "express";
const port = 3000;
const app = express();
console.log(`worker pid=${process.pid}`);
app.get("/heavy", (req, res) => {
let total = 0;
for (let i = 0; i < 5_000_000; i++) {
total++;
}
res.send(`The result of the CPU intensive task is ${total}\n`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
When you’ve finished adding your code, save and exit your file. Then run the file using the node
command:
- node index.js
When you run the command, the output will match the following:
worker pid=11023
App listening on port 3000
The output states the process ID of the running process and a message confirming that the server is listening on port 3000.
To test if the application is working, open another terminal and run the following command:
- curl http://localhost:3000/heavy
Note: If you are following this tutorial on a remote server, open another terminal, then enter the following command:
- ssh -L 3000:localhost:3000 your_non_root_user@your_server_ip
Upon connecting, enter the following command to send a request to the app using curl
:
- curl http://localhost:3000/heavy
The output will match the following:
The result of the CPU intensive task is 5000000
The output provides the result from the CPU-intensive calculation.
At this point, you can stop the server with CTRL+C.
When you run the index.js
file with the node
command, the operating system (OS) creates a process. A process is an abstraction the operating system makes for a running program. The OS allocates memory for the program and creates an entry in a process list containing all OS processes. That entry is a process ID.
The program binary is then located and loaded into the memory allocated to the process. From there, it starts executing. As it runs, it has no awareness of other processes in the system, and anything that happens in the process does not affect other processes.
Since you have a single process for the Node.js application running on a server with multiple CPUs, it will receive and handle all incoming requests. In this diagram, all the incoming requests are directed to the process running on a single CPU while the other CPUs stay idle:
Now that you have created an app without using the cluster
module, you will use the cluster
module to scale the application to use multiple CPUs next.
In this step, you will add the cluster
module to create multiple instances of the same program to handle more load and improve performance. When you run processes with the cluster
module, you can have multiple processes running on each CPU on your machine:
In this diagram, the requests go through the load balancer in the primary process, which then uses the round-robin algorithm to distribute the requests among the processes.
You’ll now add the cluster
module. In your terminal, create the primary.js
file:
- nano primary.js
In your primary.js
file, add the following lines to import dependencies:
import cluster from "cluster";
import os from "os";
import { dirname } from "path";
import { fileURLToPath } from "url";
const __dirname = dirname(fileURLToPath(import.meta.url));
In the first two lines, you import the cluster
and os
modules. In the following two lines, you import dirname
and fileURLToPath
, which you use to set the __dirname
variable value to the absolute path of the directory where the index.js
file is executing. These imports are necessary because the __dirname
is not defined when using ES modules and is only defined by default in CommonJS modules.
Next, add the following code to reference the index.js
file:
...
const cpuCount = os.cpus().length;
console.log(`The total number of CPUs is ${cpuCount}`);
console.log(`Primary pid=${process.pid}`);
cluster.setupPrimary({
exec: __dirname + "/index.js",
});
First, you set the cpuCount
variable to the number of CPUs in your machine, which should be four or higher. Next, you log the number of CPUs in the console. Then you log the process ID of the primary process, which will receive all the requests and use a load balancer to distribute them among worker processes.
Following that, you reference the index.js
file using the setupPrimary()
method of the cluster
module so that it will be executed in each worker process spawned.
Next, add the following code to create the processes:
...
for (let i = 0; i < cpuCount; i++) {
cluster.fork();
}
cluster.on("exit", (worker, code, signal) => {
console.log(`worker ${worker.process.pid} has been killed`);
console.log("Starting another worker");
cluster.fork();
});
The loop iterates as many times as the value in the cpuCount
and calls the fork()
method of the cluster
module during each iteration. You attach the exit
event using the on()
method of the cluster
module to listen when a process has emitted the exit
event, which is usually when the process dies. When the exit
event is triggered, you log the process ID of the worker that has died and then invoke the fork()
method to create a new worker process to replace the dead process.
Your complete code will now match the following:
import cluster from "cluster";
import os from "os";
import { dirname } from "path";
import { fileURLToPath } from "url";
const __dirname = dirname(fileURLToPath(import.meta.url));
const cpuCount = os.cpus().length;
console.log(`The total number of CPUs is ${cpuCount}`);
console.log(`Primary pid=${process.pid}`);
cluster.setupPrimary({
exec: __dirname + "/index.js",
});
for (let i = 0; i < cpuCount; i++) {
cluster.fork();
}
cluster.on("exit", (worker, code, signal) => {
console.log(`worker ${worker.process.pid} has been killed`);
console.log("Starting another worker");
cluster.fork();
});
Once you have finished adding the lines, save and exit your file.
Next, run the file:
- node primary.js
The output will closely match the following (your process IDs and order of information may differ):
The total number of CPUs is 4
Primary pid=7341
worker pid=7353
worker pid=7354
worker pid=7360
App listening on port 3000
App listening on port 3000
App listening on port 3000
worker pid=7352
App listening on port 3000
The output will indicate four CPUs, one primary process that includes a load balancer, and four worker processes listening on port 3000.
Next, return to the second terminal, then send a request to the /heavy
route:
- curl http://localhost:3000/heavy
The output confirms the program is working:
The result of the CPU intensive task is 5000000
You can stop the server now.
At this point, you will have four processes running on all the CPUs on your machine:
With clustering added to the application, you can compare the program performances for the one using the cluster
module and the one without the cluster
module.
In this step, you will use the loadtest
package to generate traffic against the two programs you’ve built. You’ll compare the performance of the primary.js
program which uses the cluster
module with that of the index.js
program which does not use clustering. You will notice that the program using the cluster
module performs faster and can handle more requests within a specific time than the program that doesn’t use clustering.
First, you will measure the performance of the index.js
file, which doesn’t use the cluster
module and only runs on a single instance.
In your first terminal, run the index.js
file to start the server:
- node index.js
You’ll receive an output that the app is running:
worker pid=7731
App listening on port 3000
Next, return to your second terminal to use the loadtest
package to send requests to the server:
- loadtest -n 1200 -c 200 -k http://localhost:3000/heavy
The -n
option accepts the number of requests the package should send, which is 1200
requests here. The -c
option accepts the number of requests that should be sent simultaneously to the server.
Once the requests have been sent, the package will return output similar to the following:
Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
Requests: 430 (36%), requests per second: 87, mean latency: 1815.1 ms
Requests: 879 (73%), requests per second: 90, mean latency: 2230.5 ms
Target URL: http://localhost:3000/heavy
Max requests: 1200
Concurrency level: 200
Agent: keepalive
Completed requests: 1200
Total errors: 0
Total time: 13.712728601 s
Requests per second: 88
Mean latency: 2085.1 ms
Percentage of the requests served within a certain time
50% 2234 ms
90% 2340 ms
95% 2385 ms
99% 2406 ms
100% 2413 ms (longest request)
From this output, take note of the following metrics:
- Total time measures how long it took for all the requests to be served. In this output, it took just over 13 seconds to serve all 1200 requests.
- Requests per second measures the number of requests the server can handle per second. In this output, the server handles 88 requests per second.
- Mean latency measures the time it took to send a request and get a response, which is 2085.1 ms in the sample output.
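These numbers are related: the throughput figure can be sanity-checked from the other two. A quick check, using the values from the sample output above:

```javascript
// Sanity-check the "Requests per second" figure from the sample report:
// throughput = completed requests / total time.
const completedRequests = 1200;
const totalTimeSeconds = 13.712728601;

const requestsPerSecond = completedRequests / totalTimeSeconds;
console.log(Math.round(requestsPerSecond)); // 88, matching the report
```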
Now that you have measured the performance of the index.js
file, you can stop the server.
Next, you will measure the performance of the primary.js
file, which uses the cluster
module.
To do that, return to the first terminal and rerun the primary.js
file:
- node primary.js
You’ll receive a response with the same information as earlier:
The total number of CPUs is 4
Primary pid=7841
worker pid=7852
App listening on port 3000
worker pid=7854
App listening on port 3000
worker pid=7853
worker pid=7860
App listening on port 3000
App listening on port 3000
In the second terminal, run the loadtest
command again:
- loadtest -n 1200 -c 200 -k http://localhost:3000/heavy
When it finishes, you’ll receive a similar output (it can differ based on the number of CPUs on your system):
Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
Target URL: http://localhost:3000/heavy
Max requests: 1200
Concurrency level: 200
Agent: keepalive
Completed requests: 1200
Total errors: 0
Total time: 3.412741962 s
Requests per second: 352
Mean latency: 514.2 ms
Percentage of the requests served within a certain time
50% 194 ms
90% 2521 ms
95% 2699 ms
99% 2710 ms
100% 2759 ms (longest request)
The output for the primary.js
app, which is running with the cluster
module, indicates that the total time is down to 3 seconds from 13 seconds in the program that doesn’t use clustering. The number of requests the server can handle per second has quadrupled to 352
from the previous 88
, which means your server can handle a much heavier load. Another important metric is the mean latency, which has dropped significantly from 2085.1 ms
to 514.2 ms.
This response confirms that the scaling has worked and that your application can handle more requests in a short time without delays. If you upgrade your machine to have more CPUs, the app will automatically scale to the number of CPUs and improve performance further.
As a reminder, the metrics in your terminal output will differ because of your network and processor speed, but the pattern will hold: the total time and the mean latency will drop significantly, and the requests per second will increase rapidly.
Now that you have made the comparison and noted that the app performs better with the cluster
module, you can stop the server. In the next step, you will use pm2
in place of the cluster
module.
pm2 for Clustering
So far, you have used the cluster
module to create worker processes according to the number of CPUs on your machine. You have also added the ability to restart a worker process when it dies. In this step, you will set up an alternative to automate scaling your app by using the pm2
process manager, which is built upon the cluster
module. This process manager contains a load balancer and can automatically create as many worker processes as there are CPUs on your machine. It also lets you monitor the processes and can spawn a new worker process automatically if one dies.
To use it, you run the pm2 package with the file you need to scale, which is the index.js file in this tutorial.
In your initial terminal, start the pm2 cluster with the following command:
- pm2 start index.js -i 0
The -i option accepts the number of worker processes you want pm2 to create. If you pass the argument 0, pm2 will automatically create as many worker processes as there are CPUs on your machine.
Upon running the command, pm2 will show you more details about the worker processes:
Output...
[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/cluster_demo/index.js in cluster_mode (0 instance)
[PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ index │ default │ 1.0.0 │ cluster │ 7932 │ 0s │ 0 │ online │ 0% │ 54.5mb │ nod… │ disabled │
│ 1 │ index │ default │ 1.0.0 │ cluster │ 7939 │ 0s │ 0 │ online │ 0% │ 50.9mb │ nod… │ disabled │
│ 2 │ index │ default │ 1.0.0 │ cluster │ 7946 │ 0s │ 0 │ online │ 0% │ 51.3mb │ nod… │ disabled │
│ 3 │ index │ default │ 1.0.0 │ cluster │ 7953 │ 0s │ 0 │ online │ 0% │ 47.4mb │ nod… │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
The table contains each worker’s process ID, status, CPU utilization, and memory consumption, which you can use to understand how the processes behave.
When starting the cluster with pm2, the package runs in the background and will automatically restart even when you reboot your system.
If you want to read the logs from the worker processes, you can use the following command:
- pm2 logs
You’ll receive an output of the logs:
Output[TAILING] Tailing last 15 lines for [all] processes (change the value with --lines option)
/home/sammy/.pm2/pm2.log last 15 lines:
...
PM2 | 2022-12-25T17:48:37: PM2 log: App [index:3] starting in -cluster mode-
PM2 | 2022-12-25T17:48:37: PM2 log: App [index:3] online
/home/sammy/.pm2/logs/index-error.log last 15 lines:
/home/sammy/.pm2/logs/index-out.log last 15 lines:
0|index | worker pid=7932
0|index | App listening on port 3000
0|index | worker pid=7953
0|index | App listening on port 3000
0|index | worker pid=7946
0|index | worker pid=7939
0|index | App listening on port 3000
0|index | App listening on port 3000
In the last eight lines, the log provides the output from each of the four running worker processes, including the process ID and the port number 3000. This output confirms that all the processes are running.
You can also check the status of the processes using the following command:
- pm2 ls
The output will match the following table:
Output┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ index │ default │ 1.0.0 │ cluster │ 7932 │ 5m │ 0 │ online │ 0% │ 56.6mb │ nod… │ disabled │
│ 1 │ index │ default │ 1.0.0 │ cluster │ 7939 │ 5m │ 0 │ online │ 0% │ 55.7mb │ nod… │ disabled │
│ 2 │ index │ default │ 1.0.0 │ cluster │ 7946 │ 5m │ 0 │ online │ 0% │ 56.5mb │ nod… │ disabled │
│ 3 │ index │ default │ 1.0.0 │ cluster │ 7953 │ 5m │ 0 │ online │ 0% │ 55.9mb │ nod… │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Now that the cluster is running, enter the following command in the same terminal to test its performance:
- loadtest -n 1200 -c 200 -k http://localhost:3000/heavy
The output will closely match the following:
OutputRequests: 0 (0%), requests per second: 0, mean latency: 0 ms
Target URL: http://localhost:3000/heavy
Max requests: 1200
Concurrency level: 200
Agent: keepalive
Completed requests: 1200
Total errors: 0
Total time: 3.771868785 s
Requests per second: 318
Mean latency: 574.4 ms
Percentage of the requests served within a certain time
50% 216 ms
90% 2859 ms
95% 3016 ms
99% 3028 ms
100% 3084 ms (longest request)
The Total time, Requests per second, and Mean latency values are close to the metrics generated when you used the cluster module. This alignment demonstrates that pm2 scaling works similarly.
To improve your workflow with pm2, you can generate a configuration file to pass configuration settings for your application. This approach will allow you to start or restart the cluster without passing options.
To use the config file, delete the current cluster:
- pm2 delete index.js
You’ll receive a response that it is gone:
Output[PM2] Applying action deleteProcessId on app [index.js](ids: [ 0, 1, 2, 3 ])
[PM2] [index](2) ✓
[PM2] [index](1) ✓
[PM2] [index](0) ✓
[PM2] [index](3) ✓
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
Next, generate a configuration file:
- pm2 ecosystem
The output will confirm that the file has been generated:
OutputFile /home/sammy/cluster_demo/ecosystem.config.js generated
Rename the .js extension to .cjs to enable support for ES modules:
- mv ecosystem.config.js ecosystem.config.cjs
Using your editor, open the configuration file:
- nano ecosystem.config.cjs
In your ecosystem.config.cjs file, add the highlighted code below:
module.exports = {
apps : [{
script: 'index.js',
watch: '.',
name: "cluster_app",
instances: 0,
exec_mode: "cluster",
}, {
script: './service-worker/',
watch: ['./service-worker']
}],
deploy : {
production : {
user : 'SSH_USERNAME',
host : 'SSH_HOSTMACHINE',
ref : 'origin/master',
repo : 'GIT_REPOSITORY',
path : 'DESTINATION_PATH',
'pre-deploy-local': '',
'post-deploy' : 'npm install && pm2 reload ecosystem.config.cjs --env production',
'pre-setup': ''
}
}
};
The script option accepts the file that will run in each process that the pm2 package spawns. The name property accepts any name that can identify the cluster, which helps if you need to stop, restart, or perform other actions. The instances property accepts the number of instances you want; setting instances to 0 makes pm2 spawn as many processes as there are CPUs. The exec_mode property accepts the cluster option, which tells pm2 to run in a cluster.
When you have finished, save and close your file.
To start the cluster, run the following command:
- pm2 start ecosystem.config.cjs
You’ll receive the following response:
Output[PM2][WARN] Applications cluster_app, service-worker not running, starting...
[PM2][ERROR] Error: Script not found: /home/node-user/cluster_demo/service-worker
[PM2] App [cluster_app] launched (4 instances)
The last line confirms that 4 processes are running. Because you haven't created a service-worker script in this tutorial, you can ignore the error about the service-worker not being found.
To confirm that the cluster is operating, check the status:
- pm2 ls
You’ll receive a response that confirms four processes are running:
Output┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ cluster_app │ cluster │ 0 │ online │ 0% │ 56.9mb │
│ 1 │ cluster_app │ cluster │ 0 │ online │ 0% │ 57.6mb │
│ 2 │ cluster_app │ cluster │ 0 │ online │ 0% │ 55.9mb │
│ 3 │ cluster_app │ cluster │ 0 │ online │ 0% │ 55.9mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
If you want to restart the cluster, you can use the app name you defined in the ecosystem.config.cjs file, which in this case is cluster_app:
- pm2 restart cluster_app
The cluster will restart:
OutputUse --update-env to update environment variables
[PM2] Applying action restartProcessId on app [cluster_app](ids: [ 0, 1, 2, 3 ])
[PM2] [cluster_app](0) ✓
[PM2] [cluster_app](1) ✓
[PM2] [cluster_app](2) ✓
[PM2] [cluster_app](3) ✓
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ cluster_app │ cluster │ 1 │ online │ 0% │ 48.0mb │
│ 1 │ cluster_app │ cluster │ 1 │ online │ 0% │ 47.9mb │
│ 2 │ cluster_app │ cluster │ 1 │ online │ 0% │ 38.8mb │
│ 3 │ cluster_app │ cluster │ 1 │ online │ 0% │ 31.5mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
To continue managing your cluster, you can run the following commands:
| Command | Description |
| --- | --- |
| pm2 start app_name | Starts the cluster |
| pm2 restart app_name | Kills the cluster and starts it again |
| pm2 reload app_name | Restarts the cluster without downtime |
| pm2 stop app_name | Stops the cluster |
| pm2 delete app_name | Deletes the cluster |
You can now scale your application using either the pm2 process manager or the cluster module.
In this tutorial, you scaled your application using the cluster module. First, you created a program that does not use the cluster module. You then created a program that uses the cluster module to scale the app across the multiple CPUs on your machine. After that, you compared the performance between the app that uses the cluster module and the one that doesn't. Finally, you used the pm2 package as an alternative to the cluster module to scale the app across multiple CPUs.
To progress further, you can visit the cluster module documentation page to learn more about the module.
If you want to continue using pm2, you can refer to the PM2 - Process Management documentation. You can also try our tutorial using pm2 for How To Set Up a Node.js Application for Production on Ubuntu 22.04.
Node.js also ships with the worker_threads module, which allows you to split CPU-intensive tasks among worker threads so that they finish faster. Try our tutorial on How To Use Multithreading in Node.js. You can also optimize CPU-bound tasks in the frontend using dedicated web workers, which you can accomplish by following the How To Handle CPU-Bound Tasks with Web Workers tutorial. If you want to learn how to keep CPU-bound tasks from affecting your application's request/response cycle, check out How To Handle Asynchronous Tasks with Node.js and BullMQ.
My Output:
[2023-02-08 17:27:15] > node dist/src/main
[2023-02-08 17:27:15]
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [NestFactory] Starting Nest application...
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] MongooseModule dependencies initialized +69ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] PassportModule dependencies initialized +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] JwtModule dependencies initialized +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] AppModule dependencies initialized +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] MongooseCoreModule dependencies initialized +235ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] MongooseModule dependencies initialized +16ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] MongooseModule dependencies initialized +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] MongooseModule dependencies initialized +2ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] UsersModule dependencies initialized +2ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] KpiModule dependencies initialized +2ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] HolidaysModule dependencies initialized +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [InstanceLoader] AuthModule dependencies initialized +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RoutesResolver] AppController {/}: +7ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/, GET} route +3ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RoutesResolver] UsersController {/users}: +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/users/users, GET} route +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/users/users/:username, GET} route +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/users/signup, POST} route +2ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RoutesResolver] KpiController {/kpi}: +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/kpi/kpis/:userID, GET} route +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/kpi/kpi, POST} route +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RoutesResolver] AuthController {/auth}: +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/auth/auth/login, POST} route +2ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RoutesResolver] HolidaysController {/holidays}: +1ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/holidays/holidays/:userID, GET} route +0ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [RouterExplorer] Mapped {/holidays/holiday, POST} route +3ms
[2023-02-08 17:27:18] [Nest] 17 - 02/08/2023, 5:27:18 PM LOG [NestApplication] Nest application successfully started +5ms
[]
As you can see, Nest has started up fine.
My App Spec
alerts:
- rule: DEPLOYMENT_FAILED
- rule: DOMAIN_FAILED
name: vmosbackend
region: lon
services:
- build_command: npm run build
environment_slug: node-js
github:
branch: master
deploy_on_push: true
repo: VST-GLOBAL/vstkpibackend
http_port: 8080
instance_count: 1
instance_size_slug: basic-xxs
name: vstkpibackend
routes:
- path: /
run_command: npm run start:prod
source_dir: /
Any ideas?
I believe it is an issue with my nodejs / nginx setup, but I’m not sure what is going on, and I’m not sure if I’m missing something. Here is where I setup the express server in nodejs:
if (process.env.NODE_ENV === "production") {
  const privateKey = fs.readFileSync('/etc/letsencrypt/live/domain/privkey.pem', 'utf8');
  const certificate = fs.readFileSync('/etc/letsencrypt/live/domain/cert.pem', 'utf8');
  const ca = fs.readFileSync('/etc/letsencrypt/live/domain/chain.pem', 'utf8');
  const credentials = {
    key: privateKey,
    cert: certificate,
    ca: ca
  };
  // create the HTTPS server on port 443
  var https_server = https.createServer(credentials, app).listen(443, function(err){
    console.log("Node.js Express HTTPS Server Listening on Port 443");
  });
  // HTTP server on port 80 and redirect to HTTPS
  var http_server = http.createServer(function(req,res){
    // 301 redirect (reclassifies google listings)
    res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
    res.end();
  }).listen(80, function(err){
    console.log("Node.js Express HTTP Server Listening on Port 80");
  });
  // create a
} else {
  app.listen(process.env.PORT || 3000, () => {
    console.log(`Running app backend on Port ${process.env.PORT}`);
  })
}
and here is what /etc/nginx/sites-enabled/default looks like:
# HTTP — redirect all traffic to HTTPS
server {
listen 80;
listen [::]:80 default_server ipv6only=on;
return 301 https://$host$request_uri;
}
# HTTPS — proxy all requests to the Node app
server {
# Enable HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name domain;
# Use the Let’s Encrypt certificates
ssl_certificate /etc/letsencrypt/live/domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain/privkey.pem;
# Include the SSL configuration from cipherli.st
include snippets/ssl-params.conf;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
when I run the app in NODE_ENV=development it actually works fine (going to www.domain.com works and returns the right thing), however trying to run it production i get a 502 bad gateway, any ideas what I might be doing wrong in my setup?
When I run npx db-migrate-up on the development database, it throws the following error in the App Platform console:
AssertionError [ERR_ASSERTION]: ifError got unwanted exception: permission denied for database [dbname]
As I noticed, I cannot log in as doadmin to change my current user permissions either. How could I resolve this?
Even after logging in using psql $DATABASE_URL, I cannot create a schema, since I do not have CREATE permissions on the database.
Error: self signed certificate in certificate chain
or disabling it by setting Node env.
I do not think that copying the content from the ${self.CA_CERT} env to a file, uploading it to the repository, and referring to it as a file sounds safe. There is a support page describing saving it using a script and referring to it at runtime, which I tried like this:

DATABASE_URL=${self.DATABASE_URL}&sslrootcert=/workspace/ca_cert.cert

I even tried setting NODE_EXTRA_CA_CERTS. I am using Postgraphile as middleware in my case, trying to connect to the dev database.
I’m happy to add more context to this post. Please let me know :)
[2023-01-31 04:26:22] ╭──────────── app upload ───────────╼
[2023-01-31 04:26:22] │ › uploading app container image to DOCR
[2023-01-31 04:26:26] │ Adding layer 'heroku/nodejs-engine:nodejs'
[2023-01-31 04:26:28] │ Adding layer 'heroku/nodejs-engine:yarn'
[2023-01-31 04:26:35] │ Adding 2/2 app layer(s)
[2023-01-31 04:26:35] │ Adding layer 'launcher'
[2023-01-31 04:26:35] │ Adding layer 'config'
[2023-01-31 04:26:35] │ Adding label 'io.buildpacks.lifecycle.metadata'
[2023-01-31 04:26:35] │ Adding label 'io.buildpacks.build.metadata'
[2023-01-31 04:26:35] │ Adding label 'io.buildpacks.project.metadata'
[2023-01-31 04:26:35] │ Saving <image-1>...
[2023-01-31 04:26:36] │ *** Images (sha256:335557df0a67859bf33c044925cc8be5d032a6fe7b87b54297fb89556d5c117c):
[2023-01-31 04:26:36] │ <image-2> - Post <registry-uri-4> GET <do-api-uri-3> unexpected status code 400 Bad Request
[2023-01-31 04:26:36] │ ERROR: failed to export: failed to write image to the following tags: [<image-5>: Post <registry-uri-7> GET <do-api-uri-6> unexpected status code 400 Bad Request]
[2023-01-31 04:26:36] │
[2023-01-31 04:26:36] │ command exited with code 246
[2023-01-31 04:26:36] │
[2023-01-31 04:26:36] │ » an error occurred, trying again (attempt #1)
[2023-01-31 04:26:43] │ Adding layer 'heroku/nodejs-engine:nodejs'
[2023-01-31 04:26:45] │ Adding layer 'heroku/nodejs-engine:yarn'
[2023-01-31 04:26:49] │ Adding 2/2 app layer(s)
[2023-01-31 04:26:49] │ Adding layer 'launcher'
[2023-01-31 04:26:49] │ Adding layer 'config'
[2023-01-31 04:26:49] │ Adding label 'io.buildpacks.lifecycle.metadata'
[2023-01-31 04:26:49] │ Adding label 'io.buildpacks.build.metadata'
[2023-01-31 04:26:49] │ Adding label 'io.buildpacks.project.metadata'
[2023-01-31 04:26:49] │ Saving <image-1>...
[2023-01-31 04:26:49] │ *** Images (sha256:335557df0a67859bf33c044925cc8be5d032a6fe7b87b54297fb89556d5c117c):
[2023-01-31 04:26:49] │ <image-2> - Post <registry-uri-4> GET <do-api-uri-3> unexpected status code 400 Bad Request
[2023-01-31 04:26:50] │ ERROR: failed to export: failed to write image to the following tags: [<image-5>: Post <registry-uri-7> GET <do-api-uri-6> unexpected status code 400 Bad Request]
[2023-01-31 04:26:50] │
[2023-01-31 04:26:50] │ command exited with code 246
[2023-01-31 04:26:50] │ ✘ image upload failed
Been working on a nodejs/mongodb/graphql/react app (in web3 space). I’m currently testing the app with a service and a static front-end within the app platform.
The front-end users can start a process to index their wallet, this process takes between 10 and 120 seconds, let’s say 50 seconds on average. I want to keep the user updated during the process through polling (somehow can’t get web sockets to work, separate port issue? Not sure). I’m not expecting a gigantic amount of traffic, but I need 1 external API (which is rate limited) to always work.
So I can imagine a queuing system of some sort and delegate parts of the code (the indexing function) to a separate function/worker or service.
Although I’ve managed to set up a droplet and did get my app running there, I prefer to stay serverless within the app platform as much as possible for now.
Could probably build a simple queue with a database increment/decrement. But I still like to know where to place the indexing function (function/worker/service) to also optimize that part. Any tips/ideas are welcome.
I’m using mongoose (v6.8.3) to connect to my mongodb database. As far as I know my code should work just fine:
let options = {
useNewUrlParser: true,
useUnifiedTopology: true,
};
const dbCertPath = path.resolve('./ca-certificate.crt');
if (envVars.CA_CERT) {
fs.writeFileSync(dbCertPath, envVars.CA_CERT);
options = {
useNewUrlParser: true,
useUnifiedTopology: true,
ssl: true,
sslCA: `${dbCertPath}`,
};
}
mongoose
.connect(mongodbUrl, options)
.then(() => {
logger.info('Connected to MongoDB');
})
.catch((error) => {
logger.error(error);
});
- Setup Nginx: working perfectly
- Security certificate: working perfectly
- Setup vsftpd & secured: working perfectly
Installed Node-RED: working fine over HTTP on port 1880, but not working on node-red.domainname.com, only on domainname:1880. Setup of admin credentials and password: working. As soon as I block port 1880 or enable the line uiHost: "127.0.0.1", I can't access the Node-RED server. The regular website still works fine and is getting HTTPS.
So I have two issues:
Please help guide me though this. I have been stuck at it for a few days now
Error: Failed to launch the browser process!
/workspace/node_modules/puppeteer/.local-chromium/linux-1022525/chrome-linux/chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
From what I’ve heard, the solution seems to be to install the missing dependencies by running sudo apt-get install, but I can’t do that on the app platform.
Does anyone know how to solve this? Thanks.
Web applications have request/response cycles. When you visit a URL, the browser sends a request to the server running an app that processes data or runs queries in the database. As this happens, the user is kept waiting until the app returns a response. For some tasks, the user can get a response quickly. Time-intensive tasks, however, such as processing images, analyzing data, generating reports, or sending emails, take a long time to finish and can slow down the request/response cycle. For example, suppose you have an application where users upload images. In that case, you might need to resize, compress, or convert the image to another format to preserve your server’s disk space before showing the image to the user. Processing an image is a CPU-intensive task that can block a Node.js thread until the task is finished, which might take a few seconds or minutes. Users have to wait for the task to finish to get a response from the server.
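The blocking behavior described above is easy to demonstrate: in this small sketch, a zero-delay timer is scheduled first, yet it cannot fire while synchronous CPU-bound work holds Node's single thread.

```javascript
// Schedule a zero-delay timer, then hog the thread with synchronous work.
let timerFired = false;
setTimeout(() => { timerFired = true; }, 0);

// Simulate ~200 ms of CPU-intensive work, like resizing an image.
const start = Date.now();
while (Date.now() - start < 200) {}

// The event loop never got a turn during the busy loop, so the timer
// callback has still not run at this point.
console.log(`timer fired during blocking work? ${timerFired}`); // false
```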
To avoid slowing down the request/response cycle, you can use bullmq, a distributed task (job) queue that allows you to offload time-consuming tasks from your Node.js app, freeing up the request/response cycle. This tool enables your app to send responses to the user quickly while bullmq executes the tasks asynchronously in the background, independently from your app. To keep track of jobs, bullmq uses Redis to store a short description of each job in a queue. A bullmq worker then dequeues and executes each job in the queue, marking it complete once done.
In this article, you will use bullmq to offload a time-consuming task into the background, which will enable an application to respond quickly to users. First, you will create an app with a time-consuming task without using bullmq. Then, you will use bullmq to execute the task asynchronously. Finally, you will install a visual dashboard to manage bullmq jobs in a Redis queue.
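The queue/worker model that bullmq implements can be sketched in a few lines of plain JavaScript. This in-memory illustration is not bullmq's API (bullmq persists each job in Redis and runs workers independently of the web app), but it shows the enqueue/dequeue flow and the job lifecycle:

```javascript
// In-memory sketch of the queue/worker pattern (NOT bullmq's actual API).
class SimpleQueue {
  constructor() {
    this.jobs = [];
  }
  add(name, data) {
    this.jobs.push({ name, data, status: "waiting" });
  }
}

// A worker drains the queue, running each job and marking it complete,
// mirroring how a bullmq worker processes a Redis-backed queue. Kept
// synchronous here for simplicity; real workers run asynchronously.
function runWorker(queue, processor) {
  const completed = [];
  while (queue.jobs.length > 0) {
    const job = queue.jobs.shift();
    job.status = "active";
    processor(job); // the time-consuming work happens here
    job.status = "completed";
    completed.push(job);
  }
  return completed;
}

const queue = new SimpleQueue();
queue.add("resize", { file: "photo.png", width: 300 });
queue.add("resize", { file: "photo.png", width: 150 });

const done = runWorker(queue, (job) => {
  console.log(`processing ${job.name} at width ${job.data.width}`);
});
console.log(`${done.length} job(s) completed`); // 2 job(s) completed
```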
To follow this tutorial, you will need the following:
Node.js development environment set up. For Ubuntu 22.04, follow our tutorial on How To Install Node.js on Ubuntu 22.04. For other systems, see How to Install Node.js and Create a Local Development Environment.
Redis installed on your system. On Ubuntu 22, follow Steps 1 through 3 in our tutorial on How To Install and Secure Redis on Ubuntu 22.04. For other systems, see our tutorial on How To Install and Secure Redis.
Familiarity with promises and async/await functions, which you can develop in our tutorial Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript.
Basic knowledge of how to use Express. See our tutorial on How To Get Started with Node.js and Express.
Familiarity with Embedded JavaScript (EJS). Check out our tutorial on How To Use EJS to Template Your Node Application for more details.
Basic understanding of how to process images with sharp, which you can learn in our tutorial on How To Process Images in Node.js with Sharp.
In this step, you will create a directory and install the necessary dependencies for your application. The application you’ll build in this tutorial will allow users to upload an image, which is then processed using the sharp package. Image processing is time-intensive and can slow the request/response cycle, making the task a good candidate for bullmq to offload into the background. The technique you will use to offload the task will also work for other time-intensive tasks.
To begin, create a directory called image_processor and navigate into the directory:
- mkdir image_processor && cd image_processor
Then, initialize the directory as an npm package:
- npm init -y
The command creates a package.json file. The -y option tells npm to accept all the defaults.
Upon running the command, your output will match the following:
OutputWrote to /home/sammy/image_processor/package.json:
{
"name": "image_processor",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
The output confirms that the package.json file has been created. Important properties include the name of your app (name), your application version number (version), and the starting point of your project (main). If you want to learn more about the other properties, you can review npm’s package.json documentation.
The application you will build in this tutorial will require the following dependencies:

- express: a web framework for building web apps.
- express-fileupload: a middleware that allows your forms to upload files.
- sharp: an image processing library.
- ejs: a template language that allows you to generate HTML markup with Node.js.
- bullmq: a distributed task queue.
- bull-board: a dashboard that builds upon bullmq and displays the status of the jobs with a nice user interface (UI).

To install all these dependencies, run the following command:
- npm install express express-fileupload sharp ejs bullmq @bull-board/express
In addition to the dependencies you installed, you will also use the following image later in this tutorial. Use curl to download the image to the location of your choice on your local computer:
- curl -O https://deved-images.nyc3.cdn.digitaloceanspaces.com/CART-68886/underwater.png
You have the necessary dependencies to build a Node.js app that does not use bullmq, which you will do next.
Building an App with a Time-Intensive Task Without bullmq

In this step, you will build an application with Express that allows users to upload images. The app will start a time-intensive task using sharp to resize the image into multiple sizes, which are then displayed to the user after a response is sent. This step will help you understand how time-intensive tasks affect the request/response cycle.
Using nano, or your preferred text editor, create the index.js file:
- nano index.js
In your index.js file, add the following code to import dependencies:
const path = require("path");
const fs = require("fs");
const express = require("express");
const bodyParser = require("body-parser");
const sharp = require("sharp");
const fileUpload = require("express-fileupload");
In the first line, you import the path module for computing file paths with Node. In the second line, you import the fs module for interacting with directories. You then import the express web framework. You import the body-parser module to add middleware that parses data in HTTP requests. Following that, you import the sharp module for image processing. Finally, you import express-fileupload for handling uploads from an HTML form.
Next, add the following code to implement middleware in your app:
...
const app = express();
app.set("view engine", "ejs");
app.use(bodyParser.json());
app.use(
bodyParser.urlencoded({
extended: true,
})
);
First, you set the app variable to an instance of Express. Second, using the app variable, the set() method configures Express to use the ejs template language. You then add the body-parser middleware with the use() method to transform JSON data in HTTP requests into variables that can be accessed with JavaScript. In the following line, you do the same for URL-encoded input.
Next, add the following lines to add more middleware to handle file uploads and serve static files:
...
app.use(fileUpload());
app.use(express.static("public"));
You add middleware to parse uploaded files by calling the fileUpload() method, and you set a directory where Express will look for and serve static files, such as images and CSS.
With the middleware set, create a route that displays an HTML form for uploading an image:
...
app.get("/", function (req, res) {
res.render("form");
});
Here, you use the get() method of the Express module to specify the / route and the callback that should run when the user visits the homepage or / route. In the callback, you invoke res.render() to render the form.ejs file in the views directory. You have not yet created the form.ejs file or the views directory.
To create it, first save and close your file. In your terminal, enter the following command to create the views directory in your project root directory:
- mkdir views
Move into the views directory:
- cd views
Create the form.ejs file in your editor:
- nano form.ejs
In your form.ejs file, add the following code to create the form:
<!DOCTYPE html>
<html lang="en">
<%- include('./head'); %>
<body>
<div class="home-wrapper">
<h1>Image Processor</h1>
<p>
Resizes an image to multiple sizes and converts it to a
<a href="https://en.wikipedia.org/wiki/WebP">webp</a> format.
</p>
<form action="/upload" method="POST" enctype="multipart/form-data">
<input
type="file"
name="image"
placeholder="Select image from your computer"
/>
<button type="submit">Upload Image</button>
</form>
</div>
</body>
</html>
First, you reference the head.ejs file, which you haven’t created yet. The head.ejs file will contain the HTML head element you can reference in other HTML pages.
In the body
tag, you create a form with the following attributes:
- action specifies the route where the form data should be sent when the form is submitted.
- method specifies the HTTP method for sending data. The POST method embeds the form data in the body of the HTTP request.
- enctype specifies how the form data should be encoded. The value multipart/form-data enables the HTML input elements to upload file data.

In the form element, you create an input tag to upload files. Then you define the button element with the type attribute set to submit, which lets the user submit the form.
Once finished, save and close your file.
Next, create a head.ejs
file:
- nano head.ejs
In your head.ejs
file, add the following code to create the head section of the app:
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Image Processor</title>
<link rel="stylesheet" href="css/main.css" />
</head>
Here, you reference the main.css
file, which you will create in the public
directory later in this step. That file will contain the styles for this application. For now, you will continue setting up the processes for static assets.
Save and close the file.
To handle data submitted from the form, you must define a post
method in Express. To do that, return to the root directory of your project:
- cd ..
Open your index.js
file again:
- nano index.js
In your index.js
file, add the highlighted lines to define a method for handling form submissions on route /upload
:
app.get("/", function (req, res) {
...
});
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
});
You use the app
variable to call the post()
method, which will handle the submitted form on the /upload
route. Next, you extract the uploaded image data from the HTTP request into the image
variable. After that, you set a response to return a 400
status code if the user does not upload an image.
To set the process for the uploaded image, add the following highlighted code:
...
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
const imageName = path.parse(image.name).name;
const processImage = (size) =>
sharp(image.data)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
const sizes = [90, 96, 120, 144, 160, 180, 240, 288, 360, 480, 720, 1440];
Promise.all(sizes.map(processImage));
});
These lines represent how your app will process the image. First, you remove the image extension from the uploaded image and save the name in the imageName
variable. Next, you define the processImage()
function. This function takes the size
parameter, whose value will be used to determine the image dimensions during resizing. In the function, you invoke sharp()
with image.data
, which is a buffer containing the binary data for the uploaded image. sharp
resizes the image according to the value in the size parameter. You use the webp()
method from sharp
to convert the image to the webp image format. Then, you save the image in the public/images/
directory.
The subsequent list of numbers defines the sizes that will be used to resize the uploaded image. You then use JavaScript’s map() method to invoke processImage() for each element in the sizes array, which returns a new array. Each time the map() method calls the processImage() function, a promise is added to that new array. You use the Promise.all() method to resolve them all.
Computer processing speeds vary, as will the size of the images a user can upload, both of which affect how long processing takes. To simulate a consistently slow task for demonstration purposes, insert the highlighted lines to add a CPU-intensive increment loop, followed by a redirect to a page that will display the resized images:
...
app.post("/upload", async function (req, res) {
...
let counter = 0;
for (let i = 0; i < 10_000_000_000; i++) {
counter++;
}
res.redirect("/result");
});
The loop will run 10 billion times to increment the counter
variable. You invoke the res.redirect()
function to redirect the app to the /result
route. The route will render an HTML page that will display the images in the public/images
directory.
The /result
route doesn’t exist yet. To create it, add the highlighted code in your index.js
file:
...
app.get("/", function (req, res) {
...
});
app.get("/result", (req, res) => {
const imgDirPath = path.join(__dirname, "./public/images");
let imgFiles = fs.readdirSync(imgDirPath).map((image) => {
return `images/${image}`;
});
res.render("result", { imgFiles });
});
app.post("/upload", async function (req, res) {
...
});
You define the /result
route with the app.get()
method. In the function, you define the imgDirPath
variable with the full path to the public/images
directory. You use the readdirSync()
method of the fs
module to read all the files in the given directory. From there, you chain the map()
method to return a new array with the image paths prefixed with images/.
Finally, you call res.render()
to render the result.ejs
file, which doesn’t exist yet. You pass the imgFiles
variable, which contains an array of all the images’ relative paths, to the result.ejs
file.
Save and close your file.
To create the result.ejs
file, return to the views
directory:
- cd views
Create and open the result.ejs
file in your editor:
- nano result.ejs
In your result.ejs
file, add the following lines to display images:
<!DOCTYPE html>
<html lang="en">
<%- include('./head'); %>
<body>
<div class="gallery-wrapper">
<% if (imgFiles.length > 0){%>
<p>The following are the processed images:</p>
<ul>
<% for (let imgFile of imgFiles){ %>
<li><img src=<%= imgFile %> /></li>
<% } %>
</ul>
<% } else{ %>
<p>
The image is being processed. Refresh after a few seconds to view the
resized images.
</p>
<% } %>
</div>
</body>
</html>
First, you reference the head.ejs
file. In the body
tag, you check if the imgFiles
variable is empty. If it has data, you iterate over each file and create an image for each array element. If imgFiles
is empty, you print a message that tells the user to refresh after a few seconds to view the resized images.
Save and close your file.
Next, return to the root directory and create the public
directory that will contain your static assets:
- cd .. && mkdir public
Move into the public
directory:
- cd public
Create an images
directory that will keep the uploaded images:
- mkdir images
Next, create the css
directory and navigate to it:
- mkdir css && cd css
In your editor, create and open the main.css
file, which you referenced earlier in the head.ejs
file:
- nano main.css
In your main.css
file, add the following styles:
body {
background: #f8f8f8;
}
h1 {
text-align: center;
}
p {
margin-bottom: 20px;
}
a:link,
a:visited {
color: #00bcd4;
}
/** Styles for the "Upload Image" button **/
button[type="submit"] {
background: none;
border: 1px solid orange;
padding: 10px 30px;
border-radius: 30px;
transition: all 1s;
}
button[type="submit"]:hover {
background: orange;
}
/** Styles for the "Choose File" button **/
input[type="file"]::file-selector-button {
border: 2px solid #2196f3;
padding: 10px 20px;
border-radius: 0.2em;
background-color: #2196f3;
}
ul {
list-style: none;
padding: 0;
display: flex;
flex-wrap: wrap;
gap: 20px;
}
.home-wrapper {
max-width: 500px;
margin: 0 auto;
padding-top: 100px;
}
.gallery-wrapper {
max-width: 1200px;
margin: 0 auto;
}
These lines will style elements in the app. Using HTML attributes, you style the Choose File button background with the hex code #2196f3
(a shade of blue) and the Upload Image button border to orange
. You also style the elements on the /result
route to make them more presentable.
Once finished, save and close your file.
Return to the project root directory:
- cd ../..
Open index.js
in your editor:
- nano index.js
In your index.js
, add the following code, which will start the server:
...
app.listen(3000, function () {
console.log("Server running on port 3000");
});
The complete index.js
file will now match the following:
const path = require("path");
const fs = require("fs");
const express = require("express");
const bodyParser = require("body-parser");
const sharp = require("sharp");
const fileUpload = require("express-fileupload");
const app = express();
app.set("view engine", "ejs");
app.use(bodyParser.json());
app.use(
bodyParser.urlencoded({
extended: true,
})
);
app.use(fileUpload());
app.use(express.static("public"));
app.get("/", function (req, res) {
res.render("form");
});
app.get("/result", (req, res) => {
const imgDirPath = path.join(__dirname, "./public/images");
let imgFiles = fs.readdirSync(imgDirPath).map((image) => {
return `images/${image}`;
});
res.render("result", { imgFiles });
});
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
const imageName = path.parse(image.name).name;
const processImage = (size) =>
sharp(image.data)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
const sizes = [90, 96, 120, 144, 160, 180, 240, 288, 360, 480, 720, 1440];
Promise.all(sizes.map(processImage));
let counter = 0;
for (let i = 0; i < 10_000_000_000; i++) {
counter++;
}
res.redirect("/result");
});
app.listen(3000, function () {
console.log("Server running on port 3000");
});
Once you are finished making the changes, save and close your file.
Run the app using the node
command:
- node index.js
You will receive an output like so:
Output
Server running on port 3000
This output confirms the server is running without any issues.
Open your preferred browser and visit http://localhost:3000/
.
Note: If you are following the tutorial on a remote server, you can access the app in your local browser using port forwarding.
While the Node.js server is running, open another terminal and enter the following command:
- ssh -L 3000:localhost:3000 your-non-root-user@yourserver-ip
Once you have connected to the server, run node index.js
and then navigate to http://localhost:3000/
on your local machine’s web browser.
When the page loads, it will match the following:
Next, press the Choose File button and select the underwater.png
image on your local machine. The display will switch from No file chosen to underwater.png. After that, press the Upload Image button. The app will load for a while as it processes the image and runs the incrementing loop.
Once the task finishes, the /result
route will load with the resized images:
You can stop the server now with CTRL+C
. Node.js does not automatically reload the server when files are changed, so you will need to stop and restart the server whenever you update the files.
You now know how a time-intensive task can affect an application’s request/response cycle. You will execute the task asynchronously next.
Offloading the Time-Intensive Task with bullmq
In this step, you will offload a time-intensive task to the background using bullmq
. This adjustment will free the request/response cycle and allow your app to respond to users immediately while the image is being processed.
To do that, you need to create a succinct description of the job and add it to a queue with bullmq
. A queue is a data structure that works like a line of people: the first person in line is the first to be served, and anyone who arrives later joins the end of the line and waits their turn. With the queue data structure’s First-In, First-Out (FIFO) process, the first item added to the queue is the first item to be removed (dequeued). With bullmq
, a producer will add a job in a queue, and a consumer (or worker) will remove a job from the queue and execute it.
The queue in bullmq
is in Redis. When you describe a job and add it to the queue, an entry for the job is created in a Redis queue. A job description can be a string or an object with properties that contain minimal data or references to the data that will allow bullmq
to execute the job later. Once you define the functionality to add jobs to the queue, you move the time-intensive code into a separate function. Later, bullmq
will call this function with the data you stored in the queue when the job is dequeued. Once the task has finished, bullmq
will mark it completed, pull another job from the queue, and execute it.
Open index.js
in your editor:
- nano index.js
In your index.js
file, add the highlighted lines to create a queue in Redis with bullmq
:
...
const fileUpload = require("express-fileupload");
const { Queue } = require("bullmq");
const redisOptions = { host: "localhost", port: 6379 };
const imageJobQueue = new Queue("imageJobQueue", {
connection: redisOptions,
});
async function addJob(job) {
await imageJobQueue.add(job.type, job);
}
...
You start by extracting the Queue
class from bullmq
, which is used to create a queue in Redis. You then set the redisOptions
variable to an object with properties that the Queue
class instance will use to establish a connection with Redis. You set the host
property value to localhost
because Redis is running on your local machine.
Note: If Redis were running on a remote server separate from your app, you would update the host
property value to the IP address of the remote server. You also set the port
property value to 6379
, the default port that Redis uses to listen for connections.
If you have set up port forwarding to a remote server running Redis and the app together, you do not need to update the host
property, but you will need to use the port forwarding connection every time you log in to your server to run the app.
Next, you set the imageJobQueue
variable to an instance of the Queue
class, taking the queue’s name as its first argument and an object as a second argument. The object has a connection
property with the value set to an object in the redisOptions
variable. After instantiating the Queue
class, a queue called imageJobQueue
will be created in Redis.
Finally, you define the addJob()
function that you will use to add a job in the imageJobQueue
. The function takes a parameter of job
containing the information about the job (you will call the addJob()
function with the data you want to save in a queue). In the function, you invoke the add()
method of the imageJobQueue
, taking the name of the job as the first argument and the job data as the second argument.
Add the highlighted code to call the addJob()
function to add a job in the queue:
...
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
const imageName = path.parse(image.name).name;
...
await addJob({
type: "processUploadedImages",
image: {
data: image.data.toString("base64"),
name: image.name,
},
});
res.redirect("/result");
});
...
Here, you call the addJob()
function with an object that describes the job. The object has the type
attribute with a value of the name of the job. The second property, image
, is set to an object containing the image data the user has uploaded. Because the image data in image.data
is in a buffer (binary form), you invoke the buffer’s toString() method to convert it to a base64 string that can be stored in Redis, which sets the data property as a result. The name property is set to the name of the uploaded image (including the image extension).
You have now defined the information needed for bullmq
to execute this job later. Depending on your job, you may add more job information or less.
Warning: Since Redis is an in-memory database, avoid storing large amounts of data for jobs in the queue. If you have a large file that a job needs to process, save the file on the disk or the cloud, then save the link to the file as a string in the queue. When bullmq
executes the job, it will fetch the file from the link saved in Redis.
Save and close your file.
Next, create and open the utils.js
file that will contain the image processing code:
- nano utils.js
In your utils.js
file, add the following code to define the function for processing an image:
const path = require("path");
const sharp = require("sharp");
function processUploadedImages(job) {
}
module.exports = { processUploadedImages };
You import the modules necessary to process images and compute paths in the first two lines. Then you define the processUploadedImages()
function, which will contain the time-intensive image processing task. This function takes a job
parameter that will be populated when the worker fetches the job data from the queue and then invokes the processUploadedImages()
function with the queue data. You also export the processUploadedImages()
function so that you can reference it in other files.
Save and close your file.
Return to the index.js
file:
- nano index.js
Copy the highlighted lines from the index.js
file, then delete them from this file. You will need the copied code momentarily, so save it to a clipboard. If you are using nano
, you can highlight these lines and right-click with your mouse to copy the lines:
...
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
const imageName = path.parse(image.name).name;
const processImage = (size) =>
sharp(image.data)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
const sizes = [90, 96, 120, 144, 160, 180, 240, 288, 360, 480, 720, 1440];
Promise.all(sizes.map(processImage));
let counter = 0;
for (let i = 0; i < 10_000_000_000; i++) {
counter++;
}
...
res.redirect("/result");
});
The post
method for the upload
route will now match the following:
...
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
await addJob({
type: "processUploadedImages",
image: {
data: image.data.toString("base64"),
name: image.name,
},
});
res.redirect("/result");
});
...
Save and close this file, then open the utils.js
file:
- nano utils.js
In your utils.js
file, paste the lines you just copied for the /upload
route callback into the processUploadedImages
function:
...
function processUploadedImages(job) {
const imageName = path.parse(image.name).name;
const processImage = (size) =>
sharp(image.data)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
const sizes = [90, 96, 120, 144, 160, 180, 240, 288, 360, 480, 720, 1440];
Promise.all(sizes.map(processImage));
let counter = 0;
for (let i = 0; i < 10_000_000_000; i++) {
counter++;
}
}
...
Now that you have moved the code for processing an image, you need to update it to use the image data from the job
parameter of the processUploadedImages()
function you defined earlier.
To do that, add and update the highlighted lines below:
function processUploadedImages(job) {
const imageFileData = Buffer.from(job.image.data, "base64");
const imageName = path.parse(job.image.name).name;
const processImage = (size) =>
sharp(imageFileData)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
...
}
You convert the stringified version of the image data back to binary with the Buffer.from()
method. Then you update path.parse()
with a reference to the image name saved in the queue. After that, you update the sharp()
method to take the image binary data stored in the imageFileData
variable.
The complete utils.js
file will now match the following:
const path = require("path");
const sharp = require("sharp");
function processUploadedImages(job) {
const imageFileData = Buffer.from(job.image.data, "base64");
const imageName = path.parse(job.image.name).name;
const processImage = (size) =>
sharp(imageFileData)
.resize(size, size)
.webp({ lossless: true })
.toFile(`./public/images/${imageName}-${size}.webp`);
const sizes = [90, 96, 120, 144, 160, 180, 240, 288, 360, 480, 720, 1440];
Promise.all(sizes.map(processImage));
let counter = 0;
for (let i = 0; i < 10_000_000_000; i++) {
counter++;
}
}
module.exports = { processUploadedImages };
Save and close your file, then return to the index.js
:
- nano index.js
The sharp
variable is no longer needed as a dependency since the image is now processed in the utils.js
file. Delete the highlighted line from the file:
const bodyParser = require("body-parser");
const sharp = require("sharp");
const fileUpload = require("express-fileupload");
const { Queue } = require("bullmq");
...
Save and close your file.
You have now defined the functionality to create a queue in Redis and add a job. You also defined the processUploadedImages()
function to process uploaded images.
The remaining task is to create a consumer (or worker) that will pull a job from the queue and call the processUploadedImages()
function with the job data.
Create a worker.js
file in your editor:
- nano worker.js
In your worker.js
file, add the following code:
const { Worker } = require("bullmq");
const { processUploadedImages } = require("./utils");
const workerHandler = (job) => {
console.log("Starting job:", job.name);
processUploadedImages(job.data);
console.log("Finished job:", job.name);
return;
};
In the first line, you import the Worker
class from bullmq
; when instantiated, this will start a worker that dequeues jobs from the queue in Redis and executes them. Next, you reference the processUploadedImages()
function from the utils.js
file so that the worker can call the function with the data in the queue.
You define a workerHandler()
function that takes a job
parameter containing the job data in the queue. In the function, you log that the job has started, then invoke processUploadedImages()
with the job data. After that, you log a success message and return.
To allow the worker to connect to Redis, dequeue a job from the queue, and call the workerHandler()
with the job data, add the following lines to the file:
...
const workerOptions = {
connection: {
host: "localhost",
port: 6379,
},
};
const worker = new Worker("imageJobQueue", workerHandler, workerOptions);
console.log("Worker started!");
Here, you set the workerOptions
variable to an object containing Redis’s connection settings. You set the worker
variable to an instance of the Worker
class that takes the following parameters:
- imageJobQueue: the name of the job queue.
- workerHandler: the function that will run after a job has been dequeued from the Redis queue.
- workerOptions: the Redis config settings that the worker uses to establish a connection with Redis.

Finally, you log a success message.
After adding the lines, save and close your file.
You have now defined the bullmq
worker functionality to dequeue jobs from the queue and execute them.
In your terminal, remove the images in the public/images
directory so that you can start fresh for testing your app:
- rm public/images/*
Next, run the index.js
file:
- node index.js
The app will start:
Output
Server running on port 3000
You’ll now start the worker. Open a second terminal session and navigate to the project directory:
- cd image_processor/
Start the worker with the following command:
- node worker.js
The worker will start:
Output
Worker started!
Visit http://localhost:3000/
in your browser. Press the Choose File button and select the underwater.png
from your computer, then press the Upload Image button.
You may receive an instant response that tells you to refresh the page after a few seconds:
Alternatively, you might receive an instant response with some processed images on the page while others are still being processed:
You can refresh the page a few times to load all the resized images.
Return to the terminal where your worker is running. That terminal will have a message that matches the following:
Output
Worker started!
Starting job: processUploadedImages
Finished job: processUploadedImages
The output confirms that bullmq
ran the job successfully.
Your app can still offload time-intensive tasks even if the worker is not running. To demonstrate this, stop the worker in the second terminal with CTRL+C
.
In your initial terminal session, stop the Express server and remove the images in public/images
:
- rm public/images/*
After that, start the server again:
- node index.js
In your browser, visit http://localhost:3000/
and upload the underwater.png
image again. When you are redirected to the /result
path, the images will not show on the page because the worker is not running:
Return to the terminal where you ran the worker and start the worker again:
- node worker.js
The output will match the following, which lets you know that the job has started:
Output
Worker started!
Starting job: processUploadedImages
After the job has been completed and the output includes a line that reads Finished job: processUploadedImages
, refresh the browser. The images will now load:
Stop the server and the worker.
You now can offload a time-intensive task to the background and execute it asynchronously using bullmq
. In the next step, you will set up a dashboard to monitor the status of the queue.
Monitoring bullmq Queues
In this step, you will use the bull-board
package to monitor the jobs in the Redis queue from a visual dashboard. This package will automatically create a user interface (UI) dashboard that displays and organizes the information about the bullmq
jobs that are stored in the Redis queue. Using your browser, you can monitor the jobs that are completed, are waiting, or have failed without opening the Redis CLI in the terminal.
Open the index.js
file in your text editor:
- nano index.js
Add the highlighted code to import bull-board
:
...
const { Queue } = require("bullmq");
const { createBullBoard } = require("@bull-board/api");
const { BullMQAdapter } = require("@bull-board/api/bullMQAdapter");
const { ExpressAdapter } = require("@bull-board/express");
...
In the preceding code, you import the createBullBoard()
method from bull-board
. You also import BullMQAdapter
, which allows bull-board
access to bullmq
queues, and ExpressAdapter
, which provides functionality for Express to display the dashboard.
Next, add the highlighted code to connect bull-board
with bullmq
:
...
async function addJob(job) {
...
}
const serverAdapter = new ExpressAdapter();
const bullBoard = createBullBoard({
queues: [new BullMQAdapter(imageJobQueue)],
serverAdapter: serverAdapter,
});
serverAdapter.setBasePath("/admin");
const app = express();
...
First, you set the serverAdapter
to an instance of the ExpressAdapter
. Next, you invoke createBullBoard()
to initialize the dashboard with the bullmq
queue data. You pass the function an object argument with queues
and serverAdapter
properties. The first property, queues
, accepts an array of the queues you defined with bullmq
, which is the imageJobQueue
here. The second property, serverAdapter
, contains an object that accepts an instance of the Express server adapter. After that, you set the /admin
path to access the dashboard with the setBasePath()
method.
Next, add the serverAdapter
middleware for the /admin
route:
app.use(express.static("public"));
app.use("/admin", serverAdapter.getRouter());
app.get("/", function (req, res) {
...
});
The complete index.js
file will match the following:
const path = require("path");
const fs = require("fs");
const express = require("express");
const bodyParser = require("body-parser");
const fileUpload = require("express-fileupload");
const { Queue } = require("bullmq");
const { createBullBoard } = require("@bull-board/api");
const { BullMQAdapter } = require("@bull-board/api/bullMQAdapter");
const { ExpressAdapter } = require("@bull-board/express");
const redisOptions = { host: "localhost", port: 6379 };
const imageJobQueue = new Queue("imageJobQueue", {
connection: redisOptions,
});
async function addJob(job) {
await imageJobQueue.add(job.type, job);
}
const serverAdapter = new ExpressAdapter();
const bullBoard = createBullBoard({
queues: [new BullMQAdapter(imageJobQueue)],
serverAdapter: serverAdapter,
});
serverAdapter.setBasePath("/admin");
const app = express();
app.set("view engine", "ejs");
app.use(bodyParser.json());
app.use(
bodyParser.urlencoded({
extended: true,
})
);
app.use(fileUpload());
app.use(express.static("public"));
app.use("/admin", serverAdapter.getRouter());
app.get("/", function (req, res) {
res.render("form");
});
app.get("/result", (req, res) => {
const imgDirPath = path.join(__dirname, "./public/images");
let imgFiles = fs.readdirSync(imgDirPath).map((image) => {
return `images/${image}`;
});
res.render("result", { imgFiles });
});
app.post("/upload", async function (req, res) {
const { image } = req.files;
if (!image) return res.sendStatus(400);
await addJob({
type: "processUploadedImages",
image: {
data: Buffer.from(image.data).toString("base64"),
name: image.name,
},
});
res.redirect("/result");
});
app.listen(3000, function () {
console.log("Server running on port 3000");
});
After you are done making changes, save and close your file.
Run the index.js
file:
- node index.js
Return to your browser and visit http://localhost:3000/admin
. The dashboard will load:
In the dashboard, you can review the job type, the data it consumes, and more information about the job. You can also switch to other tabs, such as the Completed tab for information about the completed jobs, the Failed tab for more information about the jobs that failed, and the Paused tab for more information about the jobs that have been paused.
You can now use the bull-board
dashboard to monitor queues.
In this article, you offloaded a time-intensive task to a job queue using bullmq
. First, without using bullmq
, you created an app with a time-intensive task that has a slow request/response cycle. Then you used bullmq
to offload the time-intensive task and execute it asynchronously, which frees up the request/response cycle. After that, you used bull-board
to create a dashboard to monitor bullmq
queues in Redis.
You can visit the bullmq
documentation to learn more about bullmq
features not covered in this tutorial, such as scheduling, prioritizing or retrying jobs, and configuring concurrency settings for workers. You can also visit the bull-board
documentation to learn more about the dashboard features.
Express is a popular framework for building fast web apps and APIs with Node. DigitalOcean’s App Platform is a Platform as a Service (PaaS) product to configure and deploy applications from a code repository. It offers a quick and efficient way to deploy your Express app. In this tutorial, you’ll deploy an Express application to DigitalOcean App Platform and then scale it by adding caching with the DigitalOcean Marketplace Add-On for MemCachier. MemCachier is compliant with the memcached object caching system but has several advantages, such as better failure scenarios with high availability clusters.
You’ll first build an Express app that calculates a prime number, has a Like button, and uses a template engine. Those features will enable you to implement several caching strategies later. You’ll then push your app’s code to GitHub and deploy it on App Platform. Finally, you’ll implement three object caching techniques to make your app faster and more scalable. By the end of this tutorial, you’ll be able to deploy an Express application to App Platform, implementing techniques for caching resource-intensive computations, rendered views, and sessions.
In this step, you’ll install a template engine for Express, create a template for your app’s home route (GET /
), and update the route to use that template. A template engine enables you to cache rendered views later, increasing the speed of request handling and decreasing resource use.
To start, navigate to the project directory of the Express server with your editor if it is not already open. You can return to the prerequisite tutorial on How To Get Started with Node.js and Express to identify where you have saved your project files.
You will install a template engine for Express to use static template files in your application. A template engine replaces variables in a template file with values and transforms the template into an HTML file, which is sent as the response to a request. Using templates makes it easier to work with HTML.
Install the Embedded JavaScript templates (ejs
) library. If you prefer, you could use one of the other template engines that Express supports, like Mustache, Pug, or Nunjucks.
- npm install ejs
With ejs
now installed, you will configure your Express app to use it.
Open the file server.js
in your editor. Then, add the highlighted line:
const express = require('express');
const app = express();
app.set('view engine', 'ejs');
app.get('/', (req, res) => {
res.send('Successful response.');
});
...
This line sets the application's view engine
setting to ejs
.
Save the file.
Note: For this tutorial, you will use the view engine
setting, but another useful setting is views
. The views
setting tells an Express app where to find template files. The default value is ./views
.
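As a sketch of that setting (not needed for this tutorial), you could point Express at a custom template directory; the templates folder name here is hypothetical:

```javascript
const path = require('path');
const express = require('express');

const app = express();
app.set('view engine', 'ejs');
// Hypothetical: look for template files in ./templates instead of the
// default ./views
app.set('views', path.join(__dirname, 'templates'));
```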
Next, create a views
directory. Then, create the file views/index.ejs
and open it in your editor.
Add the starting template markup to that file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Find the largest prime number</title>
</head>
<body>
<h1>Find the largest prime number</h1>
<p>
For any number N, find the largest prime number less than or equal to N.
</p>
</body>
</html>
Save the file.
With the template created, you will update your route to use it.
Open the file server.js
and update the highlighted code:
...
app.get('/', (req, res) => {
res.render('index');
});
...
The response render
method takes the name of a template as its first parameter. In this case, index
matches the file views/index.ejs
.
Restart your app to load the changes. Stop the server if it’s running by pressing CTRL+C
in your terminal. Then start the server again:
- node server.js
Visit localhost:3000
in your web browser, which will now display the contents of your template.
Your app now has a template-rendered view, but it doesn’t do anything yet. You’ll add functionality to find a prime number next.
In this step, you’ll add the features to find a prime number and to like numbers using a Like button. You’ll use these features to interact with the app once you have deployed to App Platform in Step 4.
In this section, you’ll add a function to your app that finds the largest prime number less than or equal to N
, where N
refers to any number.
N
will be submitted via a form with the GET
method to the home route (/
), with N
appended as a query parameter: localhost:3000/?n=10
(where 10
is a sample query). The home route can have multiple URLs that produce rendered views, which can each be cached individually.
In views/index.ejs
, add a form with an input element for entering N
:
...
<p>
For any number N, find the largest prime number less than or equal to N.
</p>
<form action="/" method="get">
<label>
N
<input type="number" name="n" placeholder="e.g. 10" required>
</label>
<button>Find Prime</button>
</form>
...
The form’s action submits to /
, which is handled by the home route app.get('/' ...)
in server.js
. As the form’s method is set to get
, the data n
will be appended to the action URL as a query parameter.
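To illustrate how a GET form serializes its field into the URL, here is a small sketch using Node's built-in URLSearchParams (the value 10 is just a sample):

```javascript
// A form with method="get", action="/", and a field named "n"
// serializes its data into the URL's query string.
const params = new URLSearchParams({ n: 10 });
const url = '/?' + params.toString();

console.log(url); // "/?n=10"
```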
Save the file.
Next, when a request is made with a query parameter of n
, you’ll pass that data to the template.
In server.js
, add the highlighted code:
...
app.get('/', (req, res) => {
const n = req.query.n;
if (!n) {
res.render('index');
return;
}
const locals = { n };
res.render('index', locals);
});
...
These lines will check if the request has a query parameter named n
. If so, you render the index
view and pass the value of n
to it. Otherwise, you generate the index
view without data.
Note: User input cannot always be trusted, so the best practice for a production-ready app would be to validate the input with a library such as Joi.
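As a minimal hand-rolled sketch of that idea (a library such as Joi offers richer schemas; the validateN helper is hypothetical), you could check the query value before using it:

```javascript
// Hypothetical helper: return the parsed integer, or null if the
// input is not a whole number of at least 2 (the smallest prime).
function validateN(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 2) return null;
  return n;
}

// In the route handler, you might then write:
// const n = validateN(req.query.n);
// if (n === null) { res.status(400).send('Invalid value for n'); return; }
```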
The render
method has a second optional parameter, locals
. This parameter defines local variables passed to a template to render a view. A shorthand property name defines the n
property of the locals
object. When a variable has the same name as the object property it’s being assigned to, the variable name can be omitted. So { n: n }
can be written as { n }
.
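The shorthand can be checked in isolation; both forms below produce the same object:

```javascript
const n = 10;
const longhand = { n: n };
const shorthand = { n }; // same as { n: n }

console.log(JSON.stringify(longhand) === JSON.stringify(shorthand)); // true
```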
Save the file.
Now that the template has some data, you can display it.
In views/index.ejs
, add the highlighted lines to display the value of N
:
...
<form action="/" method="get">
<label>
N
<input type="number" name="n" placeholder="e.g. 10" required>
</label>
<button>Find Prime</button>
</form>
<% if (locals.n) { %>
<p>N: <%= n %></p>
<% } %>
...
If a local variable n
exists for this view, you tell the app to display it.
Save the file, then restart your server to refresh the app. The form will now load with a button to Find Prime. The app will be able to accept user input and display it under the form.
Submit any number to the form. After submitting the form, the URL will change to include an n
query parameter, such as http://localhost:3000/?n=40
if you put in 40
. The value you submitted will also load under the form as N: 40.
Now that a value for N
can be submitted and displayed, you’ll add a function to find the largest prime number less than or equal to N
. Then, you’ll display that result in your view.
Create a utils
directory. Then, create the file utils/findPrime.js
.
Open findPrime.js
in your editor and add the prime number finding function:
/**
* Find the largest prime number less than or equal to `n`
* @param {number} n A positive integer greater than the smallest prime number, 2
* @returns {number}
*/
module.exports = function (n) {
let prime = 2; // initialize with the smallest prime number
for (let i = n; i > 1; i--) {
let isPrime = true;
for (let j = 2; j < i; j++) {
if (i % j == 0) {
isPrime = false;
break;
}
}
if (isPrime) {
prime = i;
break;
}
}
return prime;
};
A JSDoc comment documents the function. The algorithm starts with the first prime number (2
), then loops through numbers, starting at n
and decrementing the number by 1
in each loop. The function will continue looping and searching for a prime number until the number is 2
, the smallest prime number.
Each loop assumes the current number is a prime number, then tests that assumption. It will check if the current number has a factor other than 1
and itself. If the current number can be divided by any number greater than 1
and less than itself without a remainder, then it is not a prime number. The function will then try the next number.
Save the file.
Next, import the find prime function into server.js
:
const express = require('express');
const findPrime = require('./utils/findPrime');
...
Update your home route controller to find a prime number and pass its value to the template. Still in server.js
, add the highlighted code:
...
app.get('/', (req, res) => {
const n = req.query.n;
if (!n) {
res.render('index');
return;
}
const prime = findPrime(n);
const locals = { n, prime };
res.render('index', locals);
});
...
Save the file.
Now, you will add code to display the result in your template. In views/index.ejs
, display the value of N
:
...
<form action="/" method="get">
<label>
N
<input type="number" name="n" placeholder="e.g. 10" required>
</label>
<button>Find Prime</button>
</form>
<% if (locals.n && locals.prime) { %>
<p>
The largest prime number less than or equal to <%= n %> is <strong><%= prime %></strong>.
</p>
<% } %>
...
Save the file.
Now restart the server.
To test the functionality, submit any number. As an example, this tutorial will use 10
. If you submit the number 10
, you will receive a response stating: The largest prime number less than or equal to 10 is 7.
Your app can now take a number, then find and display a prime number. Next, you’ll add a Like button.
Currently, your app can produce different views based on each number N
submitted. Apart from updating text, the content of those views is likely to stay the same. The Like button you’ll add in this section will provide a way to update the content of a view. This button demonstrates the need for invalidating a cached view when its contents change, which will be beneficial when caching rendered views later in the tutorial.
With a Like button, the app needs somewhere to store the like data. While persistent storage is ideal, you will store likes in memory because implementing a database is beyond the scope of this tutorial. As such, the data will be ephemeral, which means all data will be lost when the server stops.
Open server.js
to add the following variable:
...
app.set('view engine', 'ejs');
/**
* Key is `n`
* Value is the number of 'likes' for `n`
*/
const likesMap = {};
...
The likesMap
object is used as a map to store likes for all requested numbers. The key is n
, and its values are the number of likes for n
.
Likes for a number need to be initialized when a number is submitted. Still in the server.js
, add the highlighted lines to initialize likes for N
:
...
const prime = findPrime(n);
// Initialize likes for this number when necessary
if (!likesMap[n]) likesMap[n] = 0;
const locals = { n, prime };
res.render('index', locals);
...
This if
statement checks whether a like count already exists for the current number. If not, the entry for that number in likesMap
is initialized to 0
.
Next, add likes as a local variable for the view:
...
const prime = findPrime(n);
// Initialize likes for this number when necessary
if (!likesMap[n]) likesMap[n] = 0;
const locals = { n, prime, likes: likesMap[n] };
res.render('index', locals);
...
Save the file.
Now that the view has data for likes, you can display its value and add a Like button.
In views/index.ejs
, add the Like button markup:
...
<% if (locals.n && locals.prime) { %>
<p>
The largest prime number less than or equal to <%= n %> is <strong><%= prime %></strong>.
</p>
<form action="/like" method="get">
<input type="hidden" name="n" value="<%= n %>">
<input type="submit" value="Like"> <%= likes %>
</form>
<% } %>
...
Your completed file should now match the following:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Find the largest prime number</title>
</head>
<body>
<h1>Find the largest prime number</h1>
<p>
For any number N, find the largest prime number less than or equal to N.
</p>
<form action="/" method="get">
<label>
N
<input type="number" name="n" placeholder="e.g. 10" required>
</label>
<button>Find Prime</button>
</form>
<% if (locals.n && locals.prime) { %>
<p>
The largest prime number less than or equal to <%= n %> is <strong><%= prime %></strong>.
</p>
<form action="/like" method="get">
<input type="hidden" name="n" value="<%= n %>">
<input type="submit" value="Like"> <%= likes %>
</form>
<% } %>
</body>
</html>
Save the file.
Restart the server, then submit a number. A Like button will appear after the prime number result with a like count of 0
.
Clicking the Like button sends a GET
request to /like
, with the current value of N
as a query parameter via a hidden input. For now, you’ll receive a 404 error with Cannot GET /like, because your app does not yet have a corresponding route.
You’ll now add the route to handle the request.
Back in server.js
, add the route:
...
app.get('/', (req, res) => {
...
});
app.get('/like', (req, res) => {
const n = req.query.n;
if (!n) {
res.redirect('/');
return;
}
likesMap[n]++;
res.redirect(`/?n=${n}`);
});
...
This new route checks if n
exists. If not, it redirects home. Otherwise, it increments likes for this number. Finally, it redirects to the view where the Like button was clicked.
Your completed file should now match the following:
const express = require('express');
const findPrime = require('./utils/findPrime');
const app = express();
app.set('view engine', 'ejs');
/**
* Key is `n`
* Value is the number of 'likes' for `n`
*/
const likesMap = {};
app.get('/', (req, res) => {
const n = req.query.n;
if (!n) {
res.render('index');
return;
}
const prime = findPrime(n);
// Initialize likes for this number when necessary
if (!likesMap[n]) likesMap[n] = 0;
const locals = { n, prime, likes: likesMap[n] };
res.render('index', locals);
});
app.get('/like', (req, res) => {
const n = req.query.n;
if (!n) {
res.redirect('/');
return;
}
likesMap[n]++;
res.redirect(`/?n=${n}`);
});
const port = process.env.PORT || 3000;
app.listen(port, () =>
console.log(`Example app is listening on port ${port}.`)
);
Save the file.
Restart the app and test the Like button again. The likes count will increment for each click.
Note: You could also use the POST
method instead of GET
for this route. It would be more RESTful because an update is made to a resource. This tutorial uses GET
rather than introducing form POST
request body handling so that you can work with the now familiar request query parameters.
Your app is now complete with fully functioning features, so you can prepare to deploy it to App Platform. In the next step, you’ll commit the app’s code with git
and push that code to GitHub.
In this step, you will create a code repository to hold all the files for your deployment. First, you will commit your code to git, and then you will push it to a GitHub repository. You will use this repository to deploy with App Platform.
In this section, you’ll commit your code to git, so it’s ready to push to GitHub.
Note: If you have not configured your settings with your username, be sure to set up Git and authenticate your GitHub account with SSH.
First, initialize a git
repository:
- git init
Next, tell Git to exclude your app’s dependencies. Create a new file called .gitignore
and add the following:
node_modules
# macOS file
.DS_Store
Note: The .DS_Store
line is specific to macOS and does not need to be present for other operating systems.
Save and close the file.
Now, add all files to git:
- git add .
Finally, commit those changes with the following command:
- git commit -m "Initial commit"
The -m
option is used to specify the commit message, which you can update with whatever message you wish.
After committing your code, you’ll receive an output like so:
Output
[main (root-commit) deab84e] Initial commit
6 files changed, 1259 insertions(+)
create mode 100644 .gitignore
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 server.js
create mode 100644 utils/findPrime.js
create mode 100644 views/index.ejs
You have committed your code to git. Next, you’ll push it to GitHub.
Now that your app’s code is committed to git, you’re ready to push it to GitHub. You can then connect the code with DigitalOcean App Platform and deploy it.
First, in your browser, log in to GitHub and create a new repository called express-memcache
. Create an empty repository without README
, .gitignore
, or license files. You can make the repository either private or public. You can also review GitHub’s documentation on creating a new repository.
Back in your terminal, add your newly created repository as a remote origin, updating your username:
- git remote add origin https://github.com/your_username/express-memcache.git
This command tells Git where to push your code.
Next, rename the default branch to main
:
- git branch -M main
Finally, push the code to your repository:
- git push -u origin main
Enter your credentials if prompted.
You’ll receive output similar to the following:
Output
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 8 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (10/10), 9.50 KiB | 9.50 MiB/s, done.
Total 10 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/your_username/express-memcache.git
* [new branch] main -> main
Branch 'main' set up to track remote branch 'main' from 'origin'.
Your app’s code is now on GitHub, ready to be deployed by App Platform.
In this step, you’ll deploy your Express app to DigitalOcean App Platform. You’ll create an App Platform app, permit it to access your GitHub repository, and then deploy it.
You’ll start by updating the server code so that your app’s port configuration can be read from the PORT
environment variable.
In this section, you’ll expand your Express server to allow the app’s port configuration to be read from an environment variable. Because the configuration will likely change between deploys, this update will enable your app to read the port from its App Platform environment.
Open the file server.js
in your editor. Then, at the bottom of the file, update the highlighted code to replace the existing app.listen
line and add a new const port
line:
...
const port = process.env.PORT || 3000;
app.listen(port, () =>
console.log(`Example app is listening on port ${port}.`)
);
This code indicates to use a PORT
environment variable if it exists or default to port 3000
otherwise.
Your app will now be able to read the port from the App Platform environment to which you will now deploy it.
You can now set up your app with App Platform.
You will incur charges for running this app on App Platform, with web services billed by the second (starting at a minimum of one minute). Pricing is displayed on the Review screen. See App Platform Pricing for details.
First, log in to your DigitalOcean account. From the Apps dashboard, click Create, then Apps. You can also follow our product documentation on How to Create Apps in App Platform.
On the Create Resource From Source Code screen, select GitHub as the Service Provider. Then, give DigitalOcean permission to access your repository. The best practice is to select only the repository that you want deployed. If you haven’t done so, you’ll be prompted to install the DigitalOcean GitHub app. Select your repository from the list and click Next.
On the Resources screen, click Edit Plan to select your plan size. This tutorial will use the Basic Plan with the smallest size Web Services (512 MB RAM | 1 vCPU) for the express-memcache resource. The Basic Plan and smallest web service offer enough resources for this sample Express app. Once you have set your plan, click Back.
Next, click the Info tab on the left navigation bar and note the region your app is in. You’ll need this in the next step when you add a DigitalOcean Marketplace Add-On for MemCachier.
Finally, click on the Review tab, then click the Create Resources button to build and deploy your app. It will take a little while for the build to run. When it is finished, you will receive a success message with a link to your deployed app.
So far, you have created an Express app that finds a prime number and has a Like button. You committed the app’s code to Git and pushed it to GitHub, and then you deployed the app on App Platform.
To make your Express app faster and more scalable, you will implement three object caching strategies. You need a cache, which you’ll create in the next step.
In this step, you’ll create and configure an object cache. Any memcached-compatible cache will work for this tutorial. You will provision one with the MemCachier Add-On from the DigitalOcean Marketplace. A MemCachier cache is an in-memory key-value store.
First, you’ll add the MemCachier Add-On from the DigitalOcean Marketplace. Visit the MemCachier Add-On page and click Add MemCachier. On the next screen, select the region your App Platform app is in, which you noted earlier. Your app and cache should be in the same region so that latency is as low as possible. You can view your App Platform app’s settings if you need to find the region again. You can optionally select a plan. Then, click Add MemCachier to provision your cache.
To figure out region name-to-slug mappings, see DigitalOcean’s Available Datacenters. For example, the region San Francisco maps to sfo3.
Next, you’ll configure your Express app to use the cache. Visit the Add-Ons dashboard, then click the name of your MemCachier Add-On to open its dashboard. On the MemCachier Add-On dashboard, click the Show button for Configuration Variables to load a display with the values for MEMCACHIER_USERNAME
, MEMCACHIER_PASSWORD
, and MEMCACHIER_SERVERS
. Take note of these values because you will need them next.
You’ll now save your MemCachier configuration variables as environment variables for your app. Go back to your App Platform app’s dashboard and click Settings. Then, under Components, click express-memc…. Scroll to the Environment Variables section, click Edit, and then add your MemCachier configuration variables with the three keys (MEMCACHIER_USERNAME
, MEMCACHIER_PASSWORD
and MEMCACHIER_SERVERS
) and the corresponding values you got from the MemCachier dashboard. For MEMCACHIER_PASSWORD
, check Encrypt because the value is a password. Click Save to update the app.
Now, you’ll configure a memcache client in your Express app, using the environment variables you just saved so that the app can communicate with your cache.
In your terminal, install the memjs
library:
- npm install memjs
Next, create a services
directory. Then, create the file services/memcache.js
and open it in your editor. At the top of the file, import memjs
and configure a cache client:
const { Client } = require('memjs');
module.exports = Client.create(process.env.MEMCACHIER_SERVERS, {
failover: true,
timeout: 1,
keepAlive: true,
});
Save the file.
This code creates a MemCachier cache client. As for the options, failover
is set to true
to use MemCachier’s high-availability clusters. If a server fails, commands for all keys stored on that server will automatically be made to the next available server. A timeout
of 1
second is better for a deployed app than the default of 0.5
seconds. keepAlive: true
keeps connections to your cache open even when idle, which is desirable because making connections is slow, and caches must be fast to be effective.
You provisioned a cache using the MemCachier Add-On from the DigitalOcean Marketplace in this step. You then added your cache’s configuration settings as App Platform environment variables, enabling you to configure a client, using memjs
, so your Express app can communicate with the cache.
Everything is ready to start implementing caching in Express, which you’ll do next.
With your Express app deployed and your MemCachier Add-On provisioned, you can now use your object cache. In this step, you will implement three object caching techniques. You will begin by caching resource-intensive computation to improve usage speeds and efficiency. Then, you will implement techniques to cache rendered views after user input to improve request handling and to cache short-lived sessions in anticipation of scaling your app beyond this tutorial.
In this section, you’ll cache resource-intensive computations to speed up your app, which results in more efficient CPU use. The findPrime
function becomes resource-intensive when a large enough number is submitted. You’ll cache its result and serve the cached value when available instead of repeating the calculation.
First, open server.js
to add the memcache client:
const express = require('express');
const findPrime = require('./utils/findPrime');
const memcache = require('./services/memcache');
...
Then, store a calculated prime number in the cache:
...
const prime = findPrime(n);
const key = 'prime_' + n;
memcache.set(key, prime.toString(), { expires: 0 }, (err) => {
if (err) console.log(err);
});
...
Save the file.
The set
method takes a key as its first parameter and a string value as its second, so you convert the prime
number to a string. The third options
argument ensures the stored item never expires. The fourth and final parameter is an optional callback, which logs the error if one occurs.
Note: As a best practice, cache errors should be handled gracefully. A cache is an enhancement and should generally not crash an app on failure. An app can work perfectly fine, albeit slower, without its cache.
Note: At this point, your app will continue to work locally but without caching. An error will be output when memcache.set
is called, because it will not be able to find a cache server:
Output
MemJS: Server <localhost:11211> failed after (2) retries with error - connect ECONNREFUSED 127.0.0.1:11211
Error: No servers available
...
For the rest of this tutorial, you don’t need local caching. If you want it to work, you could run memcached at localhost:11211
, which is the memjs
default.
Next, stage and commit your changes:
- git add . && git commit -m "Add memjs client and cache prime number"
Then, push these changes to GitHub, which should automatically deploy to App Platform:
- git push
Your App Platform dashboard will shift from the Deployed message to one that indicates your app is building. When the build is complete, open the app in your browser and submit a number to find its biggest prime.
Note: Your dashboard may display a Waiting for service
message. That message will typically resolve by itself. If it lingers, try refreshing your app to check if the build has deployed.
Next, return to the Add-Ons dashboard, then click the View MemCachier option for your named service to view your cache’s analytics dashboard.
On this dashboard, the Set Cmds option on the All Time Stats board and the Items stats on the Storage board have both increased by 1
. Each time you submit a number, Set Cmds and Items will both increase. You must press the Refresh button to load any new stats.
Note: Checking your app’s logs on App Platform can be valuable for debugging. From your app’s dashboard, click Runtime Logs to view them.
With items stored in the cache, you can make use of them. You’ll now check if an item is cached, and if so, you’ll serve it from the cache; otherwise, you’ll find the prime number as before.
Back in server.js
, update your file with the highlighted lines. You will both modify existing lines and add new lines for the cache:
...
app.get('/', (req, res) => {
const n = req.query.n;
if (!n) {
res.render('index');
return;
}
let prime;
const key = 'prime_' + n;
memcache.get(key, (err, val) => {
if (err) console.log(err);
if (val !== null) {
// Use the value from the cache
// Convert Buffer string before converting to number
prime = parseInt(val.toString());
} else {
// No cached value available, find it
prime = findPrime(n);
memcache.set(key, prime.toString(), { expires: 0 }, (err) => {
if (err) console.log(err);
});
}
// Initialize likes for this number when necessary
if (!likesMap[n]) likesMap[n] = 0;
const locals = { n, prime, likes: likesMap[n] };
res.render('index', locals);
});
});
...
Save the file.
This code initializes prime
without a value, using the let
keyword, as its value is now reassigned. Then memcache.get
attempts to retrieve the cached prime number. Most of the controller’s code now lives in the memcache.get
callback because its result is required to determine how to handle the request. If a cached value is available, use it. Otherwise, do the computation to find the prime number and store the result in the cache as before.
The value returned in the memcache.get
callback is a Buffer
, so you convert it to a string before converting prime
back into a number.
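The conversion can be sketched on its own with Node's built-in Buffer (the cached value "7" is just a sample):

```javascript
// memcache.get hands back a Buffer; decode it to a string,
// then parse the string back into a number.
const val = Buffer.from('7');
const prime = parseInt(val.toString(), 10);

console.log(typeof prime, prime); // "number 7"
```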
Commit your changes and push them to GitHub to deploy:
- git add . && git commit -m "Check cache for prime number" && git push
When you submit a number not yet cached to your app, the Set Cmds, Items, and get misses stats on your MemCachier dashboard will increase by 1
. The miss occurs because you try to get the item from the cache before setting it. The item is not in the cache, resulting in a miss, after which the item gets stored. When you submit a cached number, get hits will increment.
You are now caching resource-intensive computations. Next, you’ll cache your app’s rendered views.
In this section, you’ll cache the views rendered by your Express app with middleware. Earlier, you set up ejs
as a template engine and created a template to render views for each number N
submitted. Rendered views can be resource-intensive to create, so caching them can speed up request handling and use fewer resources.
To begin, create a middleware
directory. Then, create the file middleware/cacheView.js
and open it in your editor. In cacheView.js
, add these lines for the middleware function:
const memcache = require('../services/memcache');
/**
* Express middleware to cache views and serve cached views
*/
module.exports = function (req, res, next) {
const key = `view_${req.url}`;
memcache.get(key, (err, val) => {
if (err) console.log(err);
if (val !== null) {
// Convert Buffer string to send as the response body
res.send(val.toString());
return;
}
});
};
You first import the memcache
client. Then, you declare a key, such as view_/?n=100
. Next, you check if a view for that key is in the cache with memcache.get
. If there is no error and a value exists for that key, there’s nothing more to do, so the request finishes by sending the view back to the client.
Next, if a view is not cached, you want to cache it. To do this, override the res.send
method by adding the highlighted lines:
...
module.exports = function (req, res, next) {
const key = `view_${req.url}`;
memcache.get(key, (err, val) => {
if (err) console.log(err);
if (val !== null) {
// Convert Buffer to UTF-8 string to send as the response body
res.send(val.toString());
return;
}
const originalSend = res.send;
res.send = function (body) {
memcache.set(key, body, { expires: 0 }, (err) => {
if (err) console.log(err);
});
originalSend.call(this, body);
};
});
};
You override the res.send
method with a function that stores the view in the cache before calling the original send
function as usual. You invoke the original send
function with call
, which sets its this
context to what it would have been if not overridden. Make sure to use an anonymous function expression (not an arrow function), so the correct this
value will be specified.
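The difference can be sketched in isolation (the res object here is a stand-in, not the real Express response):

```javascript
// A stand-in response object whose send() depends on `this`.
const res = {
  name: 'response',
  send: function () { return this.name; },
};

const originalSend = res.send;
// Override with a function expression: `this` is bound at call time,
// so originalSend.call(this) receives `res`, as the original expects.
res.send = function () {
  return originalSend.call(this);
};

console.log(res.send()); // "response"
// With an arrow function, `this` would be captured from the enclosing
// scope instead of `res`, and this.name would be undefined.
```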
Then, pass control to the next middleware by adding the highlighted line:
...
/**
* Express middleware to cache views and serve cached views
*/
module.exports = function (req, res, next) {
const key = `view_${req.url}`;
memcache.get(key, (err, val) => {
if (err) console.log(err);
if (val !== null) {
// Convert Buffer to UTF-8 string to send as the response body
res.send(val.toString());
return;
}
const originalSend = res.send;
res.send = function (body) {
memcache.set(key, body, { expires: 0 }, (err) => {
if (err) console.log(err);
});
originalSend.call(this, body);
};
next();
});
};
...
Calling next
invokes the next middleware function in the app. In your case, there is no other middleware, so the controller is called. The res.render
method for Express renders a view, then calls res.send
internally with that rendered view. So now, in the controller for the home route, your override function is called when res.render
is called, storing the view in the cache before finally calling the original send
function to complete the response.
Save the file.
Note: You can also pass a callback to the render
method in the controller, but you will have to duplicate the view caching code in the controller for each route being cached.
Now import the view caching middleware into server.js
:
const express = require('express');
const findPrime = require('./utils/findPrime');
const memcache = require('./services/memcache');
const cacheView = require('./middleware/cacheView');
...
Add the highlighted code to use it with the GET /
home route:
...
app.get('/', cacheView, (req, res) => {
...
});
...
Save the file.
Then commit your changes and push them to GitHub to deploy:
- git add . && git commit -m "Add view caching" && git push
Everything should work as usual when you submit a number in your app. If you submit a new number, the MemCachier dashboard stats for Set Cmds, Items, and get misses all increase by two: once for the prime number calculation and once for the view. If you refresh the app with the same number, you’ll see a single get hit added to the MemCachier dashboard. The view is retrieved successfully from the cache, so there is no need to fetch the prime number result.
Note: The Express application setting view cache
is enabled by default in production. This view cache does not cache the contents of the template’s output, only the underlying template itself. The view is re-rendered with every request, even when the cache is on. As such, it’s different but complementary to the rendered view caching you implemented.
Now that you are caching views, you may notice that the Like button no longer works. If you were to log the likes
value, you would see that it does change. However, the cached view is not updated when the number of likes changes, so the stale view keeps being served. A cached view needs to be invalidated whenever the data it renders changes.
Next, when likes
changes, you’ll invalidate the cached view by deleting it from the cache. Back in server.js
, update the /like
route handler by adding the highlighted lines:
...
app.get('/like', (req, res) => {
const n = req.query.n;
if (!n) {
res.redirect('/');
return;
}
likesMap[n]++;
// The URL of the page being 'liked'
const url = `/?n=${n}`;
res.redirect(url);
});
...
The likes count for this view has changed, so the cached version is no longer valid. Add the highlighted lines to delete the cached view from the cache when likes
change:
...
const url = `/?n=${n}`;
// The view for this URL has changed, so the cached version is no longer valid, delete it from the cache.
const key = `view_${url}`;
memcache.delete(key, (err) => {
if (err) console.log(err);
});
res.redirect(url);
...
Your server.js
file should now match the following:
const express = require('express');
const findPrime = require('./utils/findPrime');
const memcache = require('./services/memcache');
const cacheView = require('./middleware/cacheView');
const app = express();
app.set('view engine', 'ejs');
/**
* Key is `n`
* Value is the number of 'likes' for `n`
*/
const likesMap = {};
app.get('/', cacheView, (req, res) => {
const n = req.query.n;
if (!n) {
res.render('index');
return;
}
let prime;
const key = 'prime_' + n;
memcache.get(key, (err, val) => {
if (err) console.log(err);
if (val !== null) {
// Use the value from the cache
// Convert Buffer string before converting to number
prime = parseInt(val.toString());
} else {
// No cached value available, find it
prime = findPrime(n);
memcache.set(key, prime.toString(), { expires: 0 }, (err) => {
if (err) console.log(err);
});
}
// Initialize likes for this number when necessary
if (!likesMap[n]) likesMap[n] = 0;
const locals = { n, prime, likes: likesMap[n] };
res.render('index', locals);
});
});
app.get('/like', (req, res) => {
const n = req.query.n;
if (!n) {
res.redirect('/');
return;
}
likesMap[n]++;
// The URL of the page being 'liked'
const url = `/?n=${n}`;
// The view for this URL has changed, so the cached version is no longer valid, delete it from the cache.
const key = `view_${url}`;
memcache.delete(key, (err) => {
if (err) console.log(err);
});
res.redirect(url);
});
const port = process.env.PORT || 3000;
app.listen(port, () =>
console.log(`Example app is listening on port ${port}.`)
);
Save the file.
Commit and push changes to deploy:
- git add . && git commit -m "Delete invalid cached view" && git push
The Like button on your app will now work. Several stats will change on your MemCachier dashboard when a view is liked.
You have implemented rendered view caching and invalidated cached views when they change. The final strategy you will implement is session caching.
In this section, you’ll add and cache sessions in your Express app, making your cache the session store. A common use case for sessions is user logins, so you can consider this section on caching sessions as a preliminary step for implementing a user login system in the future (though the user login system is beyond the scope of this tutorial). Storing short-lived sessions in a cache can be faster and more scalable than storing in many databases.
Note: A cache is ideal for storing short-lived sessions that time out. However, caches are not persistent; long-lived sessions are better suited to permanent storage solutions like databases.
Install the express-session
tool to add sessions to your Express app and connect-memjs
to enable the use of your MemCachier cache as the session store:
- npm install express-session connect-memjs
In server.js
, import express-session
and connect-memjs
:
const express = require('express');
const findPrime = require('./utils/findPrime');
const memcache = require('./services/memcache');
const cacheView = require('./middleware/cacheView');
const session = require('express-session');
const MemcacheStore = require('connect-memjs')(session);
...
Save the file.
The session middleware is passed to the connect-memjs
module, allowing its store to inherit from express.session.Store
.
Still in server.js
, configure the session middleware to use your cache as its store. Add the highlighted lines:
...
app.set('view engine', 'ejs');
app.use(
session({
secret: 'your-session-secret',
resave: false,
saveUninitialized: true,
store: new MemcacheStore({
servers: [process.env.MEMCACHIER_SERVERS],
prefix: 'session_',
}),
})
);
...
The secret
is used to sign the session cookie. Be sure to update your-session-secret
with a unique string.
Note: You should use an environment variable to set your secret for production setups. To do that, you can set the secret with secret: process.env.SESSION_SECRET || 'your-session-secret'
, though you would also need to set the environment variable in your App Platform dashboard.
resave
forces the session to resave if unmodified during a request. You don’t want to store the item in the cache again unnecessarily, so you set it to false
.
saveUninitialized: false
is useful when you only want to save modified sessions, as is often the case with login sessions where a user property might be added to the session after authentication. In this case, you will store all sessions indiscriminately, so you set it to true
.
Finally, set store
to your cache, setting the prefix for session cache keys to session_
. That means the key for a session item in the cache will look like session_<session ID>
.
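To make the store contract concrete, here is a minimal in-memory sketch of the get/set/destroy interface that express-session expects a store to implement (a simplified assumption: the real connect-memjs store also handles expiry, error cases, and talking to the cache over the network):

```javascript
// Minimal sketch of a session store, keyed the same way as the configuration
// above: prefix + session ID. TinyStore is a hypothetical stand-in, not the
// actual connect-memjs implementation.
class TinyStore {
  constructor(prefix) {
    this.prefix = prefix;
    this.data = new Map();
  }
  key(sid) {
    return `${this.prefix}${sid}`;
  }
  set(sid, session, cb) {
    this.data.set(this.key(sid), JSON.stringify(session));
    cb(null);
  }
  get(sid, cb) {
    const raw = this.data.get(this.key(sid));
    cb(null, raw ? JSON.parse(raw) : null);
  }
  destroy(sid, cb) {
    this.data.delete(this.key(sid));
    cb(null);
  }
}

// A session with ID 'abc123' is stored under the key 'session_abc123'.
const store = new TinyStore('session_');
store.set('abc123', { views: 1 }, () => {});
store.get('abc123', (err, session) => console.log(session)); // { views: 1 }
```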
Next, add some app-level debugging middleware with the highlighted lines, which will help identify the cached sessions in action:
...
app.use(
session({
...
})
);
/**
* Session sanity check middleware
*/
app.use(function (req, res, next) {
console.log('Session ID:', req.session.id);
// Get the item from the cache
memcache.get(`session_${req.session.id}`, (err, val) => {
if (err) console.log(err);
if (val !== null) {
console.log('Session from cache:', val.toString());
}
});
next();
});
...
That middleware will log the session ID for each request. It then gets the session for that ID from the cache and logs its contents. This approach demonstrates that sessions are working and being cached.
Save the file, then commit and push your changes to deploy:
- git add . && git commit -m "Add session caching" && git push
In your app, submit a number and then check the Runtime Logs in your App Platform dashboard to access your debugging messages. You will find the session ID and value that you logged, demonstrating that sessions are working and being cached.
On your MemCachier dashboard, once a view and session are cached, you’ll see 3
get hits for every page refresh: 1
for the view, 1
for the session, and 1
for getting the session in the debugging middleware.
You have now implemented session caching. You can stop here, or you can clean up your app in the optional final step.
The app you have deployed in this tutorial will incur charges, so you can optionally destroy the app and the MemCachier Add-On when you have finished working with them.
From the app’s dashboard, click Actions, then Destroy App.
To clean up your MemCachier Add-On, click Add-Ons, then the name of your MemCachier Add-On. Next, click on Settings and Destroy. A free MemCachier cache will be deactivated after 30 days of inactivity, but it is a good practice to clean up your tools.
In this tutorial, you created an Express app to find a prime number with a Like button. You then pushed that app to GitHub and deployed it on DigitalOcean App Platform. Finally, you made the Express app faster and more scalable by implementing three object caching techniques with the MemCachier Add-On for caching resource-intensive computations, rendered views, and sessions. You can review all the files for this tutorial in the DigitalOcean Community repository.
In each caching strategy you implemented, keys had a prefix: prime_
, view_
and session_
In addition to the namespace advantage, the prefix allows you to profile cache performance. You used the MemCachier developer plan in this tutorial, but you can also try a fully managed plan that comes with the Introspection feature set, enabling you to track the performance of individual prefixes. For example, you could monitor any prefix’s hit rate or hit ratio, providing detailed insight into your cache’s performance. To continue working with MemCachier, you can review their documentation.
To keep building with DigitalOcean App Platform, try our App Platform How-To guides and read further in our App Platform documentation.
Server is Node/Express.
How do I go about fixing this? I tried profiling my app with --inspect, I also tried the 3rd party tool “Clinic” with different views, I tried looking at my google analytics, but still was unable to find anything useful.
All my resources seem to spike at the same time. https://i.imgur.com/sLD2Iog.png
How do I even start thinking about how to solve this problem?
GraphQL is a query language for APIs, consisting of a schema definition language and a query language, that lets API consumers fetch exactly the data they need and supports flexible querying. GraphQL enables developers to evolve the API while meeting the different needs of multiple clients, for example iOS, Android, and web variants of an app. Moreover, the GraphQL schema adds a degree of type safety to the API while also serving as a form of documentation for your API.
Prisma is an open-source database toolkit with three main tools:
Prisma facilitates working with databases for application developers who want to focus on implementing value-adding features instead of spending time on complex database workflows (such as schema migrations or writing complicated SQL queries).
In this tutorial, you will use GraphQL and Prisma in combination as their responsibilities complement each other. GraphQL provides a flexible interface to your data for use in clients, such as frontends and mobile apps—GraphQL isn’t tied to any specific database. This is where Prisma comes in to handle the interaction with the database where your data will be stored.
DigitalOcean’s App Platform provides a seamless way to deploy applications and provision databases in the cloud without worrying about infrastructure. This reduces the operational overhead of running an application in the cloud; especially with the ability to create a managed PostgreSQL database with daily backups and automated failover. App Platform has native Node.js support streamlining deployment.
You’ll build a GraphQL API for a blogging application in JavaScript using Node.js. You will first use Apollo Server to build the GraphQL API backed by in-memory data structures. You will then deploy the API to the DigitalOcean App Platform. Finally you will use Prisma to replace the in-memory storage and persist the data in a PostgreSQL database and deploy the application again.
At the end of the tutorial, you will have a Node.js GraphQL API deployed to DigitalOcean, which handles GraphQL requests sent over HTTP and performs CRUD operations against the PostgreSQL database.
You can find the code for this project in the DigitalOcean Community repository.
Before you begin this guide you’ll need the following:
Basic familiarity with JavaScript, Node.js, GraphQL, and PostgreSQL is helpful, but not strictly required for this tutorial.
In this step, you will set up a Node.js project with npm and install the dependencies apollo-server
and graphql
. This project will be the foundation for the GraphQL API that you’ll build and deploy throughout this tutorial.
First, create a new directory for your project:
- mkdir prisma-graphql
Next, navigate into the directory and initialize an empty npm project:
- cd prisma-graphql
- npm init --yes
This command creates a minimal package.json
file that is used as the configuration file for your npm project.
You will receive the following output:
OutputWrote to /Users/your_username/workspace/prisma-graphql/package.json:
{
"name": "prisma-graphql",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Your npm project is now initialized, and you’re ready to install the project’s dependencies.
Install the necessary dependencies:
- npm install apollo-server graphql --save
This command installs two packages as dependencies in your project:
- apollo-server is the HTTP library that you use to define how GraphQL requests are resolved and how to fetch data.
- graphql is the library you’ll use to build the GraphQL schema.
You’ve created your project and installed the dependencies. In the next step, you will define the GraphQL schema.
In this step, you will define the GraphQL schema and corresponding resolvers. The schema will define the operations that the API can handle. The resolvers will define the logic for handling those requests using in-memory data structures, which you will replace with database queries in the next step.
First, create a new directory called src
that will contain your source files:
- mkdir src
Then run the following command to create the file for the schema:
- nano src/schema.js
Add the following code to the file:
const { gql } = require('apollo-server')
const typeDefs = gql`
type Post {
content: String
id: ID!
published: Boolean!
title: String!
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createDraft(content: String, title: String!): Post!
publish(id: ID!): Post
}
`
You define the GraphQL schema using the gql
tagged template. A schema is a collection of type definitions (hence typeDefs
) that together define the shape of queries that can be executed against your API. This will convert the GraphQL schema string into the format that Apollo expects.
The schema introduces three types:
- Post defines the type for a post in your blogging app and contains four fields, where each field is followed by its type: for example, String.
- Query defines the feed query, which returns multiple posts (as denoted by the square brackets), and the post query, which accepts a single argument and returns a single Post.
- Mutation defines the createDraft mutation for creating a draft Post and the publish mutation, which accepts an id and returns a Post.
Every GraphQL API has a query type and may or may not have a mutation type. These types are the same as a regular object type, but they are special because they define the entry point of every GraphQL query.
Next, add the posts
array to the src/schema.js
file, below the typeDefs
variable:
...
const posts = [
{
id: 1,
title: 'Subscribe to GraphQL Weekly for community news ',
content: 'https://graphqlweekly.com/',
published: true,
},
{
id: 2,
title: 'Follow DigitalOcean on Twitter',
content: 'https://twitter.com/digitalocean',
published: true,
},
{
id: 3,
title: 'What is GraphQL?',
content: 'GraphQL is a query language for APIs',
published: false,
},
]
You define the posts
array with three pre-defined posts. The structure of each post
object matches the Post
type you defined in the schema. This array holds the posts that will be served by the API. In a subsequent step, you will replace the array once the database and Prisma Client are introduced.
Next, define the resolvers
object by adding the following code below the posts
array you just defined:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return posts.filter((post) => post.published)
},
post: (parent, args) => {
return posts.find((post) => post.id === Number(args.id))
},
},
Mutation: {
createDraft: (parent, args) => {
posts.push({
id: posts.length + 1,
title: args.title,
content: args.content,
published: false,
})
return posts[posts.length - 1]
},
publish: (parent, args) => {
const postToPublish = posts.find((post) => post.id === Number(args.id))
postToPublish.published = true
return postToPublish
},
},
Post: {
content: (parent) => parent.content,
id: (parent) => parent.id,
published: (parent) => parent.published,
title: (parent) => parent.title,
},
}
module.exports = {
resolvers,
typeDefs,
}
You define the resolvers following the same structure as the GraphQL schema. Every field in the schema’s types has a corresponding resolver function whose responsibility is to return the data for that field in your schema. For example, the Query.feed()
resolver will return the published posts by filtering the posts
array.
Resolver functions receive four arguments:
- parent is the return value of the previous resolver in the resolver chain. For top-level resolvers, parent is undefined, because no previous resolver is called. For example, when making a feed query, the Query.feed() resolver will be called with parent’s value undefined, and then the resolvers of Post will be called, where parent is the object returned from the feed resolver.
- args carries the parameters for the query. For example, the post query will receive the id of the post to be fetched.
- context is an object that gets passed through the resolver chain that each resolver can write to and read from, which allows the resolvers to share information.
- info is an AST representation of the query or mutation. You can read more about the details in this Prisma series on GraphQL Basics.
Since context and info are not necessary in these resolvers, only parent and args are defined.
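The role of parent can be demonstrated without a running server. This plain-Node sketch (with stand-in data, no Apollo involved) shows a top-level resolver being called first and its return value then being passed to a field resolver as parent:

```javascript
// Plain-Node sketch of how `parent` flows through a resolver chain.
// The posts data here is a stand-in for the array defined in the schema file.
const posts = [
  { id: 1, title: 'Hello', published: true },
  { id: 2, title: 'Draft', published: false },
];

const resolvers = {
  Query: { feed: (parent, args) => posts.filter((p) => p.published) },
  Post: { title: (parent) => parent.title },
};

// A GraphQL server first calls the top-level resolver; parent is undefined.
const feed = resolvers.Query.feed(undefined, {});
// It then calls each requested field resolver, passing the object returned
// above as `parent`.
const titles = feed.map((post) => resolvers.Post.title(post));
console.log(titles); // [ 'Hello' ]
```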
Save and exit the file once you’re done.
Note: When a resolver returns the same field as the resolver’s name, like the four resolvers for Post
, Apollo Server will automatically resolve those. This means you don’t have to explicitly define those resolvers.
- Post: {
- content: (parent) => parent.content,
- id: (parent) => parent.id,
- published: (parent) => parent.published,
- title: (parent) => parent.title,
- },
You export the schema and resolvers so that you can use them in the next step to instantiate the server with Apollo Server.
In this step, you will create the GraphQL server with Apollo Server and bind it to a port so that the server can accept connections.
First, run the following command to create the file for the server:
- nano src/server.js
Add the following code to the file:
const { ApolloServer } = require('apollo-server')
const { resolvers, typeDefs } = require('./schema')
const port = process.env.PORT || 8080
new ApolloServer({ resolvers, typeDefs }).listen({ port }, () =>
console.log(`Server ready at: http://localhost:${port}`),
)
Here, you instantiate the server and pass the schema and resolvers from the previous step.
The port the server will bind to is set from the PORT
environment variable. If not set, it will default to 8080
. The PORT
environment variable will be automatically set by App Platform and will ensure your server can accept connections once deployed.
Save and exit the file.
Your GraphQL API is ready to run. Start the server with the following command:
- node src/server.js
You will receive the following output:
OutputServer ready at: http://localhost:8080
It’s considered good practice to add a start script to your package.json
file so that the entry point to your server is clear. Doing so will allow App Platform to start the server once deployed.
First, stop the server by pressing CTRL+C
. Then, to add a start script, open the package.json
file:
- nano package.json
Add the highlighted text to the "scripts"
object in package.json
:
{
"name": "prisma-graphql",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node ./src/server.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"apollo-server": "^3.11.1",
"graphql": "^16.6.0"
}
}
Save and exit the file.
Now you can start the server with the following command:
- npm start
You will receive the following output:
Output> prisma-graphql@1.0.0 start
> node ./src/server.js
Server ready at: http://localhost:8080
To test the GraphQL API, open the URL from the output, which will lead you to the Apollo GraphQL Studio. Click the Query Your Server button on the home page to interact with the IDE.
The Apollo GraphQL Studio is an IDE where you can test the API by sending queries and mutations.
For example, to test the feed
query, which only returns published posts, enter the following query to the left side of the IDE and send the query by pressing the Run or play button:
query {
feed {
id
title
content
published
}
}
The response will display a title of Subscribe to GraphQL Weekly
with its URL and Follow DigitalOcean on Twitter
with its URL.
Click on the +
button on the bar above your previous query to create a new tab. Then, to test the createDraft
mutation, enter the following mutation:
mutation {
createDraft(title: "Deploying a GraphQL API to DigitalOcean") {
id
title
content
published
}
}
After you submit the mutation using the play button, you will receive a response with Deploying a GraphQL API to DigitalOcean
within the title
field as part of the response.
Note: You can choose which fields to return from the mutation by adding or removing fields within the curly braces ({}
) following createDraft
. For example, if you wanted to only return the id
and title
you could send the following mutation:
mutation {
createDraft(title: "Deploying a GraphQL API to DigitalOcean") {
id
title
}
}
You have successfully created and tested the GraphQL server. In the next step, you will create a GitHub repository for the project.
In this step, you will create a GitHub repository for your project and push your changes so that the GraphQL API can be automatically deployed from GitHub to App Platform.
First, stop the development server by pressing CTRL+C
. Then initialize a repository from the prisma-graphql
folder using the following command:
- git init
Next, use the following two commands to commit the code to the repository:
- git add src package-lock.json package.json
- git commit -m 'Initial commit'
Now that the changes have been committed to your local repository, you will create a repository in GitHub and push your changes.
Go to GitHub to create a new repository. For consistency, name the repository prisma-graphql and then click Create repository.
After the repository is created, push the changes with the following commands, which includes renaming the default local branch to main
:
- git remote add origin git@github.com:your_github_username/prisma-graphql.git
- git branch -M main
- git push --set-upstream origin main
You have successfully committed and pushed the changes to GitHub. Next, you will connect the repository to App Platform and deploy the GraphQL API.
In this step, you will connect the GitHub repository you just created to DigitalOcean and then configure App Platform so that the GraphQL API can be automatically deployed when you push changes to GitHub.
First, visit the App Platform page in the DigitalOcean Cloud Console and click on the Create App button.
You will see service provider options with GitHub as the default.
If you have not configured DigitalOcean to your GitHub account, click on the Manage Access button to be redirected to GitHub.
You can select all repositories or specific repositories. Click Install & Authorize, then you will be redirected back to the DigitalOcean App Platform creation.
Choose the repository your_github_username/prisma-graphql
and click Next. Autodeploy is selected by default, and you can leave it selected for consistency in redeploys.
On the Resources page, click the Edit Plan button to choose a suitable plan. Select the Basic plan with the plan size you need (this tutorial will use the $5.00/mo - Basic plan).
Then press the Back button to return to the creation page.
If you press the pen icon next to your project name, you can customize the configuration for the app. The Application Settings page will open:
Ensure that the Run Command is set as npm start
. By default, App Platform will set the HTTP port to 8080
, which is the same port that you’ve configured your GraphQL server to bind to.
When you have finished customizing the configuration, press the Back button to return to the setup page. Then, press the Next button to move to the Environment Variables page.
Your environment variables will not need further configuration at the moment. Click the Next button.
On the Info page, you can adjust App Details and Location. Edit your app information to choose the region you want to deploy your app to. Confirm your app details by pressing the Save button. Then, click the Next button.
You will be able to review all of your selected options on the Review page. Then click Create Resources. You will be redirected to the app page, where you will see the progress of the initial deployment.
Once the build finishes, you will get a notification indicating that your app is deployed.
You can now visit your deployed GraphQL API at the URL below the app’s name in your DigitalOcean Console. It will be linked via the ondigitalocean.app
subdomain. When you open the URL, Apollo GraphQL Studio will open the same way as it did in Step 3 of this tutorial.
You have successfully connected your repository to App Platform and deployed your GraphQL API. Next you will evolve your app and replace the in-memory data of the GraphQL API with a database.
So far, you have built a GraphQL API using the in-memory posts
array to store data. If your server restarts, all changes to the data will be lost. To ensure that your data is safely persisted, you will replace the posts
array with a PostgreSQL database and use Prisma to access the data.
In this step, you will install the Prisma CLI, create your initial Prisma schema (the main configuration file for your Prisma setup, containing your database schema), set up PostgreSQL locally with Docker, and connect Prisma to it.
Begin by installing the Prisma CLI with the following command:
- npm install --save-dev prisma
The Prisma CLI will help with database workflows such as running database migrations and generating Prisma Client.
Next, you’ll set up your PostgreSQL database using Docker. Create a new Docker Compose file with the following command:
- nano docker-compose.yml
Add the following code to the newly created file:
version: '3.8'
services:
postgres:
image: postgres:14
restart: always
environment:
- POSTGRES_USER=test-user
- POSTGRES_PASSWORD=test-password
volumes:
- postgres:/var/lib/postgresql/data
ports:
- '5432:5432'
volumes:
postgres:
This Docker Compose configuration file is responsible for starting the official PostgreSQL Docker image on your machine. The POSTGRES_USER
and POSTGRES_PASSWORD
environment variables set the credentials for the superuser (a user with admin privileges). You will also use these credentials to connect Prisma to the database. Replace the test-user
and test-password
with your user credentials.
Finally, you define a volume where PostgreSQL will store its data and bind the 5432
port on your machine to the same port in the Docker container.
Save and exit the file.
With this setup in place, you can launch the PostgreSQL database server with the following command:
- docker-compose up -d
It may take a few minutes to load.
You can verify that the database server is running with the following command:
- docker ps
This command will output something similar to:
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
198f9431bf73 postgres:14 "docker-entrypoint.s…" 45 seconds ago Up 11 seconds 0.0.0.0:5432->5432/tcp prisma-graphql_postgres_1
With the PostgreSQL container running, you can now create your Prisma setup. Run the following command from the Prisma CLI:
- npx prisma init
As a best practice, all invocations of the Prisma CLI should be prefixed with npx
to ensure it uses your local installation.
An output like this will print:
Output✔ Your Prisma schema was created at prisma/schema.prisma
You can now open it in your favorite editor.
Next steps:
1. Set the DATABASE_URL in the .env file to point to your existing database. If your database has no tables yet, read https://pris.ly/d/getting-started
2. Set the provider of the datasource block in schema.prisma to match your database: postgresql, mysql, sqlite, sqlserver, mongodb or cockroachdb.
3. Run prisma db pull to turn your database schema into a Prisma schema.
4. Run prisma generate to generate the Prisma Client. You can then start querying your database.
More information in our documentation:
https://pris.ly/d/getting-started
After running the command, the Prisma CLI generates a dotenv file named .env
in the project folder to define your database connection URL, as well as a new nested folder called prisma
that contains the schema.prisma
file. This is the main configuration file for your Prisma project (in which you will include your data model).
To make sure Prisma knows about the location of your database, open the .env
file:
- nano .env
Adjust the DATABASE_URL
environment variable with your user credentials:
DATABASE_URL="postgresql://test-user:test-password@localhost:5432/my-blog?schema=public"
You use the database credentials test-user
and test-password
, which are specified in the Docker Compose file. If you modified the credentials in your Docker Compose file, be sure to update this line to match the credentials in that file. To learn more about the format of the connection URL, visit the Prisma docs.
You have successfully started PostgreSQL and configured Prisma using the Prisma schema. In the next step, you will define your data model for the blog and use Prisma Migrate to create the database schema.
Now you will define your data model in the Prisma schema file you’ve just created. This data model will then be mapped to the database with Prisma Migrate, which will generate and send the SQL statements for creating the tables that correspond to your data model.
Since you’re building a blog, the main entities of the application will be users and posts. In this step, you will define a Post
model with a similar structure to the Post
type in the GraphQL schema. In a later step, you will evolve the app and add a User
model.
Note: The GraphQL API can be seen as an abstraction layer for your database. When building a GraphQL API, it’s common for the GraphQL schema to closely resemble your database schema. However, as an abstraction, the two schemas won’t necessarily have the same structure, thereby allowing you to control which data you want to expose over the API as some data might be considered sensitive or irrelevant for the API layer.
Prisma uses its own data modeling language to define the shape of your application data.
Open your schema.prisma
file from the project’s folder where package.json
is located:
- nano prisma/schema.prisma
Note: You can verify from the terminal in which folder you are with the pwd
command, which will output the current working directory. Additionally, listing the files with the ls
command will help you navigate your file system.
Add the following model definitions to it:
...
model Post {
id Int @default(autoincrement()) @id
title String
content String?
published Boolean @default(false)
}
You define a model called Post
with a number of fields. The model will be mapped to a database table; the fields represent the individual columns.
The id
field has the following field attributes:
- @default(autoincrement()) sets an auto-incrementing default value for the column.
- @id sets the column as the primary key for the table.
Save and exit the file.
With the model in place, you can now create the corresponding table in the database using Prisma Migrate with the migrate dev
command to create and run the migration files.
In your terminal, run the following command:
- npx prisma migrate dev --name init --skip-generate
This command creates a new migration on your file system and runs it against the database to create the database schema. The --name init
flag specifies the name of the migration (will be used to name the migration folder that’s created on your file system). The --skip-generate
flag skips generating Prisma Client (this will be done in the next step).
This command will output something similar to:
OutputEnvironment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "my-blog", schema "public" at "localhost:5432"
PostgreSQL database my-blog created at localhost:5432
Applying migration `20201201110111_init`
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20201201110111_init/
└─ migration.sql
Your database is now in sync with your schema.
Your prisma/migrations
directory is now populated with the SQL migration file. This approach allows you to track changes to the database schema and create the same database schema in production.
Note: If you’ve already used Prisma Migrate with the my-blog
database and there is an inconsistency between the migrations in the prisma/migrations
folder and the database schema, you will be prompted to reset the database with the following output:
Output? We need to reset the PostgreSQL database "my-blog" at "localhost:5432". All data will be lost.
Do you want to continue? › (y/N)
You can resolve this by entering y, which will reset the database. Beware that this will cause all data in the database to be lost.
You’ve now created your database schema. In the next step, you will install Prisma Client and use it in your GraphQL resolvers.
Prisma Client is an auto-generated and type-safe Object Relational Mapper (ORM) that you can use to programmatically read and write data in a database from a Node.js application. In this step, you’ll install Prisma Client in your project.
In your terminal, install the Prisma Client npm package:
- npm install @prisma/client
Note: Prisma Client provides rich auto-completion by generating code based on your Prisma schema to the node_modules
folder. To generate the code, you use the npx prisma generate
command. This is typically done after you create and run a new migration. On the first install, however, this is not necessary as it will automatically be generated for you in a postinstall
hook.
After creating the database and GraphQL schema and installing Prisma Client, you will now use Prisma Client in the GraphQL resolvers to read and write data in the database. You’ll do this by replacing the posts
array, which you’ve used so far to hold your data.
Create and open the following file:
- nano src/db.js
Add the following lines to the new file:
const { PrismaClient } = require('@prisma/client')
module.exports = {
prisma: new PrismaClient(),
}
This code imports Prisma Client, creates an instance of it, and exports the instance that you’ll use in your resolvers.
Now save and close the src/db.js
file.
Next, you will import the prisma
instance into src/schema.js
. To do so, open src/schema.js
:
- nano src/schema.js
Add this line to import prisma
from ./db
at the top of the file:
const { prisma } = require('./db')
...
Then remove the posts
array by deleting the lines that are marked with the hyphen symbol (-
):
...
-const posts = [
- {
- id: 1,
- title: 'Subscribe to GraphQL Weekly for community news ',
- content: 'https://graphqlweekly.com/',
- published: true,
- },
- {
- id: 2,
- title: 'Follow DigitalOcean on Twitter',
- content: 'https://twitter.com/digitalocean',
- published: true,
- },
- {
- id: 3,
- title: 'What is GraphQL?',
- content: 'GraphQL is a query language for APIs',
- published: false,
- },
-]
...
You will next update the Query
resolvers to fetch published posts from the database. First, delete the existing lines in the resolvers.Query
, then update the object by adding the highlighted lines:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
...
Here, you use two Prisma Client queries:
- findMany fetches posts whose published field is true.
- findUnique fetches a single post whose id field equals the id GraphQL argument.

Per the GraphQL specification, the ID type is serialized the same way as a String. Therefore, you convert the ID to a Number because the id in the Prisma schema is an Int.
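The conversion matters because the resolver receives the ID as a string. A standalone illustration (plain Node, no Prisma required):

```javascript
// GraphQL serializes the ID scalar as a string, so args.id is "3", not 3.
const args = { id: '3' } // what the post resolver receives for post(id: 3)

// Prisma's where clause needs the integer that the database column stores.
const where = { id: Number(args.id) }

console.log(typeof args.id) // "string"
console.log(where.id)       // 3
```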
Next, you will update the Mutation
resolver to save and update posts in the database. First, delete the code in the resolvers.Mutation
object and the Number(args.id)
lines, then add the highlighted lines:
const resolvers = {
...
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: {
id: Number(args.id),
},
data: {
published: true,
},
})
},
},
}
You’re using two Prisma Client queries:
- create to create a Post record.
- update to update the published field of the Post record whose id matches the one in the query argument.

Finally, remove the resolvers.Post object:
...
-Post: {
- content: (parent) => parent.content,
- id: (parent) => parent.id,
- published: (parent) => parent.published,
- title: (parent) => parent.title,
-},
...
Your schema.js
should now read as follows:
const { gql } = require('apollo-server')
const { prisma } = require('./db')
const typeDefs = gql`
type Post {
content: String
id: ID!
published: Boolean!
title: String!
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createDraft(content: String, title: String!): Post!
publish(id: ID!): Post
}
`
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: {
id: Number(args.id),
},
data: {
published: true,
},
})
},
},
}
module.exports = {
resolvers,
typeDefs,
}
Save and close the file.
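Before starting the server, you can sanity-check the resolver wiring in isolation by substituting a hand-rolled stub for prisma. This is a sketch, not part of the tutorial's files; it only verifies how arguments are handed off to Prisma Client:

```javascript
// A stub that records the arguments each Prisma Client call receives.
const calls = []
const prisma = {
  post: {
    create: (opts) => { calls.push(['create', opts]); return opts.data },
    update: (opts) => { calls.push(['update', opts]); return opts.data },
  },
}

// The same resolver shapes used in the tutorial, wired to the stub.
const Mutation = {
  createDraft: (parent, args) =>
    prisma.post.create({ data: { title: args.title, content: args.content } }),
  publish: (parent, args) =>
    prisma.post.update({ where: { id: Number(args.id) }, data: { published: true } }),
}

Mutation.createDraft(null, { title: 'Hello', content: 'World' })
Mutation.publish(null, { id: '1' })

console.log(calls[0][1].data.title) // "Hello"
console.log(calls[1][1].where.id)   // 1 (the string ID converted to a number)
```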
Now that you’ve updated the resolvers to use Prisma Client, start the server to test the flow of data between the GraphQL API and the database with the following command:
- npm start
Once again, you will receive the following output:
OutputServer ready at: http://localhost:8080
Open the Apollo GraphQL Studio at the address from the output and test the GraphQL API using the same queries from Step 3.
Now you will commit your changes so that the changes can be deployed to App Platform. Stop the Apollo server with CTRL+C
.
To avoid committing the node_modules
folder and the .env
file, check the .gitignore
file in your project folder:
- cat .gitignore
Confirm that your .gitignore
file contains these lines:
node_modules
.env
If it doesn’t, update the file to match.
Save and exit the file.
Then run the following two commands to commit the changes:
- git add .
- git commit -m 'Add Prisma'
You will receive an output response like this:
Outputgit commit -m 'Add Prisma'
[main 1646d07] Add Prisma
9 files changed, 157 insertions(+), 39 deletions(-)
create mode 100644 .gitignore
create mode 100644 docker-compose.yml
create mode 100644 prisma/migrations/20201201110111_init/migration.sql
create mode 100644 prisma/migrations/migration_lock.toml
create mode 100644 prisma/schema.prisma
create mode 100644 src/db.js
You have updated your GraphQL resolvers to use the Prisma Client to make queries and mutations to your database, then committed all the changes to your remote repository. Next you’ll add a PostgreSQL database to your app in App Platform.
In this step, you will add a PostgreSQL database to your app in App Platform. Then you will use Prisma Migrate to run the migration against it so that the deployed database schema matches your local database.
First, visit the App Platform console and select the prisma-graphql project you created in Step 5.
Next, click the Create button and select Create/Attach Database from the dropdown menu, which will lead you to a page to configure your database.
Choose Dev Database, select a name, and click Create and Attach.
You will be redirected back to the Project view, where there will be a progress bar for creating the database.
After the database has been created, you will run the database migration against the production database on DigitalOcean from your local machine. To run the migration, you will need the connection string of the hosted database.
To get it, click on the db icon in the Components section of the Settings tab.
Under Connection Details, press View and then select Connection String in the dropdown menu. Copy the database URL, which will have the following structure:
postgresql://db:some_password@unique_identifier.db.ondigitalocean.com:25060/db?sslmode=require
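Node's built-in URL class can unpack a string of this shape, which is a quick way to confirm the host, port, and database name before running the migration. The values below are the placeholders from the structure shown above, not real credentials:

```javascript
// Parse a PostgreSQL connection string of the shape shown above.
// The credentials and host are placeholders, not real values.
const dbUrl = new URL(
  'postgresql://db:some_password@unique_identifier.db.ondigitalocean.com:25060/db?sslmode=require'
)

console.log(dbUrl.hostname)                    // unique_identifier.db.ondigitalocean.com
console.log(dbUrl.port)                        // 25060
console.log(dbUrl.pathname.slice(1))           // db (the database name)
console.log(dbUrl.searchParams.get('sslmode')) // require
```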
Then, run the following command in your terminal, ensuring that you set your_db_connection_string
to the URL you just copied:
- DATABASE_URL="your_db_connection_string" npx prisma migrate deploy
This command will run the migrations against the live database with Prisma Migrate.
If the migration succeeds, you will receive the following output:
OutputPostgreSQL database db created at unique_identifier.db.ondigitalocean.com:25060
Prisma Migrate applied the following migration(s):
migrations/
└─ 20201201110111_init/
└─ migration.sql
You have successfully migrated the production database on DigitalOcean, which now matches the Prisma schema.
Note: If you receive the following error message:
OutputError: P1001: Can't reach database server at `unique_identifier.db.ondigitalocean.com`:`25060`
Navigate to the database dashboard to confirm that your database has been provisioned. You may need to update or disable the Trusted Sources for the database.
Now you can deploy your app by pushing your Git changes with the following command:
- git push
Note: App Platform will make the DATABASE_URL
environment variable available to your application at run-time. Prisma Client will use that environment variable with the env("DATABASE_URL")
in the datasource
block of your Prisma schema.
This will automatically trigger a build. If you open the App Platform console, you will have a deployment progress bar.
Once the deployment succeeds, you will receive a Deployment went live message.
You’ve now backed up your deployed GraphQL API with a database. Open the Live App, which will lead you to the Apollo GraphQL Studio. Test the GraphQL API using the same queries from Step 3.
In the final step you will evolve the GraphQL API by adding the User
model.
Your GraphQL API for blogging has a single entity named Post
. In this step, you’ll evolve the API by defining a new model in the Prisma schema and adapting the GraphQL schema to make use of the new model. You will introduce a User
model with a one-to-many relation to the Post
model, which will allow you to represent the author of posts and associate multiple posts to each user. Then you will evolve the GraphQL schema to allow the creation of users and association of posts with users through the API.
First, open the Prisma schema:
- nano prisma/schema.prisma
Add the highlighted lines to add the authorId
field to the Post
model and to define the User
model:
...
model Post {
id Int @id @default(autoincrement())
title String
content String?
published Boolean @default(false)
author User? @relation(fields: [authorId], references: [id])
authorId Int?
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String
posts Post[]
}
You’ve added the following items to the Prisma schema:
- The relation fields author and posts. Relation fields define connections between models at the Prisma level and do not exist in the database. They are used to generate Prisma Client and to access relations with Prisma Client.
- The authorId field, which is referenced by the @relation attribute. Prisma will create a foreign key in the database to connect Post and User.
- The User model to represent users.

The author field in the Post model is optional, which allows you to create posts that are not associated with a user.
Save and exit the file once you’re done.
Next, create and apply the migration locally with the following command:
- npx prisma migrate dev --name "add-user"
When the migration succeeds, you will receive the following message:
OutputEnvironment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "my-blog", schema "public" at "localhost:5432"
Applying migration `20201201123056_add_user`
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20201201123056_add_user/
└─ migration.sql
Your database is now in sync with your schema.
✔ Generated Prisma Client (4.6.1 | library) to ./node_modules/@prisma/client in 53ms
The command also generates Prisma Client so that you can make use of the new table and fields.
You will now run the migration against the production database on App Platform so that the database schema is the same as your local database. Run the following command in your terminal and set DATABASE_URL
to the connection URL from App Platform:
- DATABASE_URL="your_db_connection_string" npx prisma migrate deploy
You will receive the following output:
OutputEnvironment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "db", schema "public" at "unique_identifier.db.ondigitalocean.com:25060"
2 migrations found in prisma/migrations
Applying migration `20201201123056_add_user`
The following migration(s) have been applied:
migrations/
└─ 20201201123056_add_user/
└─ migration.sql
All migrations have been successfully applied.
You will now update the GraphQL schema and resolvers to make use of the updated database schema.
Open the src/schema.js
file:
- nano src/schema.js
Update typeDefs
with the highlighted lines as follows:
...
const typeDefs = gql`
type User {
email: String!
id: ID!
name: String
posts: [Post!]!
}
type Post {
content: String
id: ID!
published: Boolean!
title: String!
author: User
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createUser(data: UserCreateInput!): User!
createDraft(authorEmail: String, content: String, title: String!): Post!
publish(id: ID!): Post
}
input UserCreateInput {
email: String!
name: String
posts: [PostCreateWithoutAuthorInput!]
}
input PostCreateWithoutAuthorInput {
content: String
published: Boolean
title: String!
}
`
...
In this updated code, you add the following changes to the GraphQL schema:
- The User type, whose posts field returns an array of Post.
- The author field to the Post type.
- The createUser mutation, which expects the UserCreateInput as its input type.
- The PostCreateWithoutAuthorInput input type, used in the UserCreateInput input for creating posts as part of the createUser mutation.
- The authorEmail optional argument to the createDraft mutation.

With the schema updated, you will now update the resolvers to match the schema.
Update the resolvers
object with the highlighted lines as follows:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
published: false,
author: args.authorEmail && {
connect: { email: args.authorEmail },
},
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: { id: Number(args.id) },
data: {
published: true,
},
})
},
createUser: (parent, args) => {
return prisma.user.create({
data: {
email: args.data.email,
name: args.data.name,
posts: {
create: args.data.posts,
},
},
})
},
},
User: {
posts: (parent, args) => {
return prisma.user
.findUnique({
where: { id: parent.id },
})
.posts()
},
},
Post: {
author: (parent, args) => {
return prisma.post
.findUnique({
where: { id: parent.id },
})
.author()
},
},
}
...
The createDraft
mutation resolver now uses the authorEmail
argument (if passed) to create a relation between the created draft and an existing user.
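The args.authorEmail && { ... } expression relies on JavaScript's short-circuit evaluation: when authorEmail is absent, the whole expression evaluates to undefined, and Prisma treats the author field as not provided. A standalone illustration:

```javascript
// Build the `author` value the way the createDraft resolver does.
const buildAuthor = (authorEmail) =>
  authorEmail && { connect: { email: authorEmail } }

// With an email, Prisma receives a relation to connect:
console.log(buildAuthor('natalia@prisma.io'))
// { connect: { email: 'natalia@prisma.io' } }

// Without one, the expression short-circuits to undefined,
// so the draft is created with no author:
console.log(buildAuthor(undefined)) // undefined
```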
The new createUser
mutation resolver creates a user and related posts using nested writes.
The User.posts
and Post.author
resolvers define how to resolve the posts
and author
fields when the User
or Post
are queried. These use Prisma’s Fluent API to fetch the relations.
Save and exit the file.
Start the server to test the GraphQL API:
- npm start
Begin by testing the createUser
resolver with the following GraphQL mutation:
mutation {
createUser(data: { email: "natalia@prisma.io", name: "Natalia" }) {
email
id
}
}
This mutation will create a user.
Next, test the createDraft
resolver with the following mutation:
mutation {
createDraft(
authorEmail: "natalia@prisma.io"
title: "Deploying a GraphQL API to App Platform"
) {
id
title
content
published
author {
id
name
}
}
}
You can fetch the author
whenever the return value of a query is Post
. In this example, the Post.author
resolver will be called.
Close the server when finished testing.
Then commit your changes and push to deploy the API:
- git add .
- git commit -m "add user model"
- git push
It may take a few minutes for your updates to deploy.
You have successfully evolved your database schema with Prisma Migrate and exposed the new model in your GraphQL API.
In this article, you built a GraphQL API with Prisma and deployed it to DigitalOcean’s App Platform. You defined a GraphQL schema and resolvers with Apollo Server. You then used Prisma Client in your GraphQL resolvers to persist and query data in the PostgreSQL database. As a next step, you can extend the GraphQL API with a query to fetch individual users and a mutation to connect an existing draft to a user.
If you’re interested in exploring the data in the database, check out Prisma Studio. You can also visit the Prisma documentation to learn about different aspects of Prisma and explore some ready-to-run example projects in the prisma-examples
repository.
You can find the code for this project in the DigitalOcean Community repository.
root@ShaliniSite:/home/nodeapp# sudo npm install -g pm2
WARN engine pm2@5.2.2: wanted: {"node":">=10.0.0"} (current: {"node":"8.10.0","npm":"3.5.2"})
npm ERR! Linux 4.15.0-189-generic
npm ERR! argv "/usr/bin/node" "/usr/bin/npm" "install" "-g" "pm2"
npm ERR! node v8.10.0
npm ERR! npm v3.5.2
npm ERR! code EMISSINGARG
npm ERR! typeerror Error: Missing required argument #1
npm ERR! typeerror at andLogAndFinish (/usr/share/npm/lib/fetch-package-metadata.js:31:3)
npm ERR! typeerror at fetchPackageMetadata (/usr/share/npm/lib/fetch-package-metadata.js:51:22)
npm ERR! typeerror at resolveWithNewModule (/usr/share/npm/lib/install/deps.js:456:12)
npm ERR! typeerror at /usr/share/npm/lib/install/deps.js:457:7
npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
npm ERR! typeerror at /usr/share/npm/lib/fetch-package-metadata.js:37:12
npm ERR! typeerror at addRequestedAndFinish (/usr/share/npm/lib/fetch-package-metadata.js:82:5)
npm ERR! typeerror at returnAndAddMetadata (/usr/share/npm/lib/fetch-package-metadata.js:117:7)
npm ERR! typeerror at pickVersionFromRegistryDocument (/usr/share/npm/lib/fetch-package-metadata.js:134:20)
npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
npm ERR! typeerror This is an error with npm itself. Please report this error at:
npm ERR! typeerror <http://github.com/npm/npm/issues>
WARN engine pidusage@3.0.2: wanted: {"node":">=10"} (current: {"node":"8.10.0","npm":"3.5.2"})
WARN engine mkdirp@1.0.4: wanted: {"node":">=10"} (current: {"node":"8.10.0","npm":"3.5.2"})
WARN engine semver@7.3.8: wanted: {"node":">=10"} (current: {"node":"8.10.0","npm":"3.5.2"})
npm ERR! Please include the following file with any support request:
npm ERR! /home/nodeapp/npm-debug.log
Is there any way to trigger another function from within a function?
For now, the only way I found is to trigger it with another HTTP request.
According to the Nuxt docs, we just need to add the port to the .env file: https://nuxt.com/docs/getting-started/deployment#configuring-defaults-at-runtime
I put these in my .env, but it's not working:
#NITRO_PORT = "3001"
PORT = "3001"
How I build the Nuxt 3 app:
First I go to the Nuxt project directory and
run npm run build,
then pm2 start.
Afterwards I check:
netstat -plant | grep 3001
which does not return anything,
but netstat -plant | grep 3000
does return something:
0      0 :::3000                 :::*                    LISTEN      711987/PM2 v5.2.2:
btw this is my ecosystem.config.js
module.exports = {
apps: [
{
name: 'nuxt-php-framework',
exec_mode: 'cluster',
instances: 'max',
script: './.output/server/index.mjs'
}
]
}
this is my package.json
{
"name": "nuxt-php-framework",
"scripts": {
"build": "nuxt build",
"dev": "nuxt dev",
"generate": "nuxt generate",
"preview": "nuxt preview",
"postinstall": "nuxt prepare"
},
"dependencies": {
"axios": "^1.2.1",
"nuxt": "^3.0.0"
},
"devDependencies": {
"@nuxt/postcss8": "^1.1.3",
"autoprefixer": "^10.4.13",
"postcss": "^8.4.20",
"tailwindcss": "^3.2.4"
}
}
thanks
npx quasar build
(build command) followed by npx quasar serve dist/spa -p 9000 --history
(run command; the Quasar serve command is part of the Quasar CLI package). When I set the commands in DigitalOcean, my deployment fails with the following:
[2022-12-22 10:39:45] Browser target......... es2019|edge88|firefox78|chrome87|safari13.1
[2022-12-22 10:39:45] =======================
[2022-12-22 10:39:45] Output folder.......... /workspace/dist/spa
[2022-12-22 10:39:45]
[2022-12-22 10:39:45] Tip: Built files are meant to be served over an HTTP server
[2022-12-22 10:39:45] Opening index.html over file:// won't work
[2022-12-22 10:39:45]
[2022-12-22 10:39:45] Tip: You can use "$ quasar serve" command to create a web server,
[2022-12-22 10:39:45] both for testing or production. Type "$ quasar serve -h" for
[2022-12-22 10:39:45] parameters. Also, an npm script (usually named "start") can
[2022-12-22 10:39:45] be added for deployment environments.
[2022-12-22 10:39:45] If you're using Vue Router "history" mode, don't forget to
[2022-12-22 10:39:45] specify the "--history" parameter: "$ quasar serve --history"
[2022-12-22 10:39:45]
[2022-12-22 10:39:47] App • Looking for Quasar App Extension "serve" command "dist/spa"
[2022-12-22 10:39:47] App • ⚠️ Quasar App Extension "serve" is missing...
I’ve double-checked, and the Quasar CLI is in fact in my package.json. Why is this not working?
Strapi is an open-source, headless Content Management System (CMS) built with JavaScript. Like other headless CMSes, Strapi doesn’t come with a frontend out of the box. Instead, it relies on an API that allows you to architect your content structure. Additionally, Strapi offers a variety of ways to build out your website, integrating with popular frameworks like React and Next.js. Furthermore, you can choose to consume the API using either REST or GraphQL.
In this tutorial, you will install Strapi and set up a production environment to begin creating content. While Strapi runs SQLite in its development mode, you will configure it to use PostgreSQL. You’ll also serve your Strapi application behind an Nginx reverse proxy, and use the PM2 process manager to ensure stable uptime. Finally, you will secure your Nginx connection using Let’s Encrypt.
To follow this tutorial, you will need:
http://localhost:1337
as the app_server_address
.With Node.js version 16, Nginx, and Postgres installed on your server, you’re ready to proceed with the tutorial.
A database is required for any Strapi project. Currently, Strapi supports MySQL, MariaDB, SQLite, and PostgreSQL. You can check the minimum version requirements in the official documentation. Furthermore, Strapi expects a fresh database: you can’t link an existing database to your Strapi instance.
First, create a database:
- sudo -i -u postgres createdb strapi-db
Then create a user for your database:
- sudo -i -u postgres createuser --interactive
OutputEnter name of role to add: sammy
Shall the new role be a superuser? (y/n) y
By default, in PostgreSQL, you authenticate as database users using the Identification Protocol or ident authentication method. This involves PostgreSQL taking the client’s Ubuntu username and using it as the database username. This allows for greater security in many cases, but it can also cause issues when you’d like an outside program, such as Strapi, to connect to one of your databases. To resolve this, set a password for this PostgreSQL role to allow Strapi to connect to your database.
From your terminal, open the PostgreSQL prompt:
- sudo -u postgres psql
From the PostgreSQL prompt, update the user profile to have a strong password of your choosing:
- ALTER USER sammy PASSWORD 'postgres_password';
Exit out of your PostgreSQL user by entering \q
in your terminal:
- \q
With your database and user credentials created, you’re ready to install Strapi.
To install Strapi on your server, enter the following command:
- npx create-strapi-app@latest my-project
Confirm with y
to proceed with the installation.
After confirming yes, you’ll access an interactive installation. Choose the following options while making sure your Database name
, Username
, and Password
are changed appropriately:
Output? Choose your installation type Custom (manual settings)
? Choose your preferred language JavaScript
? Choose your default database client postgres
? Database name: strapi-db
? Host: 127.0.0.1
? Port: 5432
? Username: sammy
? Password: postgres_password
? Enable SSL connection: No
The SSL connection is not enabled because it will be configured and obtained with a Let’s Encrypt certificate later in the tutorial. Strapi will begin installation after you make your selections.
Once the installation is complete, you’re ready to build your Strapi project.
First, make sure that you’re in the my-project
directory:
- cd my-project
Next, run the following command:
- NODE_ENV=production npm run build
Output> my-project@0.1.0 build
> strapi build
Building your admin UI with production configuration...
✔ Webpack
Compiled successfully in 35.44s
Admin UI built successfully
This command will build your Strapi project, including the Strapi admin UI.
You can now test your Strapi server. Run the following command to start your Strapi server directly:
- node /home/sammy/my-project/node_modules/.bin/strapi start
Output[2022-11-21 13:54:24.671] info: The Users & Permissions plugin automatically generated a jwt secret and stored it in .env under the name JWT_SECRET.
Project information
┌────────────────────┬──────────────────────────────────────────────────┐
│ Time │ Mon Nov 21 2022 13:54:24 GMT+0000 (Coordinated … │
│ Launched in │ 1603 ms │
│ Environment │ development │
│ Process PID │ 4743 │
│ Version │ 4.5.4 (node v16.18.1) │
│ Edition │ Community │
└────────────────────┴──────────────────────────────────────────────────┘
Actions available
One more thing...
Create your first administrator 💻 by going to the administration panel at:
┌─────────────────────────────┐
│ http://localhost:1337/admin │
└─────────────────────────────┘
If you followed the prerequisites, you set up Nginx as a reverse proxy to Strapi’s default address of http://localhost:1337
. Navigate to http://your_domain in your browser to view the default Strapi landing page:
This command with the default configuration is currently using Strapi’s development mode. It also relies on a process that is tied to a command in your terminal and isn’t suitable for production. In the next step, you will add production settings to a process manager called PM2.
Exit your server by pressing CTRL+C.
With Strapi installed, you’re ready to set up PM2 to run your server in the background as a service.
Instead of starting the server manually, you can rely on PM2 to manage this process. For more details about PM2 and configuring a production ready Node.js application, review our guide. PM2 helps to keep your server up and running without having to start it up manually, ensuring uptime.
First, make sure you’re in the top directory:
- cd ~
Next, install PM2 with the following command:
- sudo npm install pm2@latest -g
Then, create a configuration file for PM2 with your preferred text editor. nano
is used in this example:
- sudo nano ecosystem.config.js
Add the following content to this file, while making sure to change the project directory name and path, along with your database name, user, and password:
module.exports = {
apps: [
{
name: 'strapi',
cwd: '/home/sammy/my-project',
script: 'npm',
args: 'start',
env: {
NODE_ENV: 'production',
DATABASE_HOST: 'localhost',
DATABASE_PORT: '5432',
DATABASE_NAME: 'strapi-db',
DATABASE_USERNAME: 'sammy',
DATABASE_PASSWORD: 'postgres_password',
},
},
],
};
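Strapi’s generated config/database.js typically resolves these DATABASE_* variables with fallback defaults. As a rough, plain-Node sketch of what that lookup amounts to (the helper name and defaults here are illustrative, not Strapi’s actual file):

```javascript
// Approximation of how a config file can resolve connection settings:
// each environment variable falls back to a default when unset.
const env = (key, fallback) =>
  process.env[key] !== undefined ? process.env[key] : fallback

// Simulate the environment that PM2 injects via ecosystem.config.js.
process.env.DATABASE_HOST = 'localhost'
process.env.DATABASE_PORT = '5432'
delete process.env.DATABASE_NAME // unset, so the fallback applies

const connection = {
  host: env('DATABASE_HOST', '127.0.0.1'),
  port: Number(env('DATABASE_PORT', 5432)), // env vars are strings
  database: env('DATABASE_NAME', 'strapi'),
}

console.log(connection.host)     // "localhost" (from the environment)
console.log(connection.port)     // 5432
console.log(connection.database) // "strapi" (fallback, since unset)
```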
After editing the PM2 configuration, exit the file. If you’re using nano
, press CTRL+x
, then y
, and ENTER
.
Run your Strapi instance in the background with the following command:
- pm2 start ecosystem.config.js
Output[PM2][WARN] Applications strapi not running, starting...
[PM2] App [strapi] launched (1 instances)
┌─────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ strapi │ default │ N/A │ fork │ 22608 │ 0s │ 0 │ online │ 0% │ 30.3mb │ sammy │ disabled │
└─────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Applications that are running under PM2 restart automatically if the application crashes or is killed. You can launch your Strapi instance at startup by running the following subcommand:
- pm2 startup
Output[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
This generates and configures a startup script to launch PM2 and its managed processes when the server boots.
Next, copy and execute the command given to you in the output, using your username in place of sammy
:
- sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Then, save the PM2 process list:
- pm2 save
You now have the PM2 service running on your server. If you navigate back to http://your_domain, notice that Strapi is now running in production mode:
With PM2 running your server in the background, you can finish up securing your Strapi instance.
As you may have noticed when you navigated to your domain to view the Strapi landing page, the URL begins with http:// instead of https://, indicating an insecure connection.
Secure your Strapi instance with Let’s Encrypt by entering the following command:
- sudo snap install --classic certbot
Link the certbot
command from the snap
install directory to your path so you can run it by writing certbot
:
- sudo ln -s /snap/bin/certbot /usr/bin/certbot
Next, allow HTTPS traffic and the Nginx Full
profile:
- sudo ufw allow 'Nginx Full'
Delete the redundant Nginx HTTP
profile allowance:
- sudo ufw delete allow 'Nginx HTTP'
Then use the Nginx plugin to obtain the certificate by inserting your domain address:
- sudo certbot --nginx -d your_domain -d www.your_domain
When running the command, you are prompted to enter an email address and agree to the terms of service. You can opt in or out of an email list as well. After doing so, you are greeted with a message telling you the process was successful and where your certificates are stored:
Output. . .
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/your_domain/fullchain.pem
Key is saved at: /etc/letsencrypt/live/your_domain/privkey.pem
This certificate expires on 2023-02-05.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
Deploying certificate
Successfully deployed certificate for your_domain to /etc/nginx/sites-enabled/your_domain
Successfully deployed certificate for www.your_domain to /etc/nginx/sites-enabled/your_domain
Congratulations! You have successfully enabled HTTPS on https://your_domain and https://www.your_domain
. . .
Navigate to http://your_domain. You are automatically redirected to an HTTPS version of your site. Notice too, that Strapi is running in production mode:
You can now navigate to https://your_domain/admin to create your Strapi administrator account:
After entering in your new credentials, you can enter the administrative dashboard:
From the dashboard, you can start creating content on Strapi.
In this tutorial, you set up a production environment for Strapi using a PostgreSQL database. You also served your Strapi application behind an Nginx reverse proxy and used the PM2 process manager to keep your server up and running.
After setting up your Strapi server, you can start creating content using the Strapi administrative dashboard. You can learn more about setting up and configuring your Strapi application from Strapi’s official documentation.
Kubernetes is a powerful container orchestration tool that allows you to deploy and manage containerized applications, but it can sometimes take time to manage the underlying infrastructure. The serverless paradigm helps users deploy applications without having to worry about the underlying infrastructure. With the advent of Serverless 2.0, many platforms and tools now allow you to deploy serverless applications on Kubernetes.
Knative is a Kubernetes-based platform that provides components to deploy and manage serverless workloads. Knative offers open-source Kubernetes integration, cloud-agnosticism, building blocks, and extensibility. Tools like Red Hat’s Openshift also use Knative for users to deploy their serverless workloads on top of Kubernetes.
Knative features two main components: Eventing and Serving. Eventing manages events that trigger serverless workloads. Serving is a set of components to deploy and manage serverless workloads. Knative Serving enables developers to deploy and manage serverless applications on top of Kubernetes. With Knative Serving, developers can quickly and easily deploy new services, scale them up and down, and connect them to other services and event sources. This feature enables developers to build and deploy modern, cloud-native applications that are flexible, scalable, and easy to maintain.
In this tutorial, you will use Knative Serving to deploy a Node.js application as a serverless workload on a DigitalOcean Kubernetes cluster. You will use doctl
(the DigitalOcean CLI) and kn
(the Knative CLI) to create the Kubernetes cluster and deploy the application.
To complete this tutorial, you will need the following:
- doctl, installed on your machine. The GitHub Download option is recommended. See How To Use doctl for more information on using doctl.
- kubectl, installed on your machine, which you can set up with the Kubernetes installation docs.
Since Knative is a Kubernetes-based platform, you will use it with a Kubernetes cluster on DigitalOcean. There are multiple ways to launch a Kubernetes cluster on DigitalOcean. You can use the DigitalOcean Cloud interface, the DigitalOcean CLI, or the Terraform provider.
In this tutorial, you will use doctl
, the DigitalOcean command-line client, to launch the Kubernetes cluster. If you have not yet installed doctl
, follow the steps in the official installation guide.
To effectively use Knative in this tutorial, you will need a Kubernetes cluster with at least 4GB RAM and 2 CPU cores available. You can launch a cluster named knative-tutorial
with these specifications by running the doctl
command with the following flags:
- --size specifies the size of the remote server.
- --count specifies the number of nodes that will be created as part of the cluster.
To create the DigitalOcean Kubernetes cluster, run the following command:
- doctl kubernetes cluster create knative-tutorial --size s-2vcpu-4gb --count 3
In this command, you create a cluster named knative-tutorial
with the size
set to 4GB RAM and 2 CPU cores and with a count
of 3
nodes.
Note: You can also use the --region
flag to specify in which region the server will be located, though that option is not used in this tutorial. If you are using a remote server, such as a Droplet, you might wish to have your cluster in the same region as the server. If you use the DigitalOcean Control Panel to create your cluster, you can choose a datacenter region and VPC network. For more on DigitalOcean’s regional availability, you can refer to our Regional Availability Matrix.
This command will take a few minutes to complete. Once it finishes, you will receive a message similar to the following:
Output
Notice: Cluster is provisioning, waiting for cluster to be running
...........................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/home/sammy/.kube/config"
Notice: Setting current-context to do-nyc1-knative-tutorial
ID Name Region Version Auto Upgrade Status Node Pools
d2d1f9bc-114b-45e7-b109-104137f9ab62 knative-tutorial nyc1 1.24.4-do.0 false running knative-tutorial-default-pool
The cluster is now ready to use.
You can now verify if kubectl
has been set up in your system and can reach the DigitalOcean Kubernetes Cluster with the following command:
- kubectl cluster-info
You should receive a similar output:
Output
Kubernetes control plane is running at https://69de217e-0284-4e18-a6d7-5606915a4e88.k8s.ondigitalocean.com
CoreDNS is running at https://69de217e-0284-4e18-a6d7-5606915a4e88.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Since the output lists the URLs of the control plane
and the CoreDNS
service (highlighted in the output block), you know that kubectl
is configured correctly in your system and can reach the cluster.
As part of the cluster creation process, doctl
automatically configures the kubectl
context to use the new cluster. You can verify this by running the following command:
- kubectl config current-context
This command will return the name of the current context.
You should receive the following output:
Output
do-nyc1-knative-tutorial
The output indicates that the current context is do-nyc1-knative-tutorial
, which is the name of the cluster you created with the region in which it is located (nyc1
).
You can also verify that the cluster is running and the nodes are ready to accept workloads with this command:
- kubectl get nodes
The kubectl get nodes
command lists all the nodes in the cluster along with their status and other information.
You should receive the following output:
Output
NAME STATUS ROLES AGE VERSION
do-nyc1-knative-tutorial-159783000-0v9k5 Ready <none> 2m52s v1.24.4
do-nyc1-knative-tutorial-159783000-1h4qj Ready <none> 2m52s v1.24.4
do-nyc1-knative-tutorial-159783000-1q9qf Ready <none> 2m52s v1.24.4
The output states that the cluster has three nodes ready to accept workloads.
In this step, you launched a Kubernetes cluster on DigitalOcean. You can now install Knative to deploy your serverless workload on Kubernetes.
In this step, you will install Knative Serving on your Kubernetes cluster. Knative Serving is responsible for deploying and managing your serverless workloads.
To install Knative Serving, you will need the Knative core components and custom resources. Run these commands to install the core components:
- kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-crds.yaml
- kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-core.yaml
The kubectl apply
command installs the Knative core components and custom resources on your cluster. The -f
flag specifies a file containing configuration changes. In this case, the configuration changes are in the YAML files you download from the Knative repository.
These commands will take a few minutes to complete. You will receive the following output (the output below is truncated for brevity):
Output
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusteringresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
...
This output indicates that the Knative core components and custom resources are on your cluster. All of the components are in the knative-serving
namespace.
Once they have finished downloading, you can verify that Knative Serving is installed:
- kubectl get pods --namespace knative-serving
The kubectl get pods
command will retrieve a list of all the pods launched in the cluster in the namespace knative-serving
. This command identifies the pods in your cluster, their current status, the number of containers in the pod, and the names of the containers in a particular namespace.
You should receive a similar output:
Output
NAME READY STATUS RESTARTS AGE
activator-5f6b4bf5c8-kfxrv 1/1 Running 0 4m37s
autoscaler-bc7d6c9c9-v5jqt 1/1 Running 0 4m34s
controller-687d88ff56-9g4gz 1/1 Running 0 4m32s
domain-mapping-69cc86d8d5-kr57g 1/1 Running 0 4m29s
domainmapping-webhook-65dfdd9b96-nzs9c 1/1 Running 0 4m27s
net-kourier-controller-55c99987b4-hkfpl 1/1 Running 0 3m49s
webhook-587cdd8dd7-qbb9b 1/1 Running 0 4m22s
The output displays all the pods running in the knative-serving
namespace. The pods are responsible for the different components of Knative Serving.
Knative requires a networking layer to route incoming traffic to your services. The networking layer in Knative enables the deployment and communication of microservices in a distributed environment. Knative Serving supports Istio, Contour, and Kourier as the networking layer.
In this tutorial, you will use Kourier as the networking layer because it integrates seamlessly with Knative. Kourier uses the same APIs and standards as the rest of the Knative ecosystem, making it a good option for developers and organizations already using Knative who want to benefit from its powerful networking capabilities.
Install Kourier with this command:
- kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.8.0/kourier.yaml
You should receive the following output:
Output
namespace/kourier-system configured
configmap/kourier-bootstrap configured
configmap/config-kourier configured
serviceaccount/net-kourier configured
clusterrole.rbac.authorization.k8s.io/net-kourier configured
clusterrolebinding.rbac.authorization.k8s.io/net-kourier configured
deployment.apps/net-kourier-controller configured
service/net-kourier-controller configured
deployment.apps/3scale-kourier-gateway configured
service/kourier configured
service/kourier-internal configured
The output lists all the resources, like Namespaces
and ConfigMaps
, created as part of the Kourier installation process in the Kubernetes cluster.
To configure Knative to use Kourier as the networking layer, you will edit the config-network
ConfigMap.
For this, you need to use the kubectl patch
command to update the fields of an object in a Kubernetes cluster. You will also need to include some flags with this command:
- --namespace specifies where you can find the object you want to patch.
- --type specifies which patch to perform when applying configs with the patch command. The available types are json, merge, and strategic.
- --patch specifies the patch data directly on the command line rather than in a file.
Run this command with the associated flags:
- kubectl patch configmap/config-network \
- --namespace knative-serving \
- --type merge \
- --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
The kubectl patch
command patches configmap/config-network
with the namespace
set to knative-serving
and the type
set to merge
.
The merge
patch type allows for more targeted updates, while the json
or strategic
patch types allow for more comprehensive updates. The merge
patch type specifies individual fields to update without including the entire resource configuration in the patch command, which is the case for the other types. The data being patched is identified with the patch
flag.
You should receive the following output:
Output
configmap/config-network patched
The output of this command confirms that Kourier is properly set up in the cluster.
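The merge semantics themselves are small enough to sketch in code. The snippet below is an illustrative JavaScript implementation of JSON merge-patch (RFC 7386) behavior applied to a ConfigMap-shaped object; the starting ingress-class value is hypothetical, and only the patched value comes from the command above:

```javascript
// Illustrative JSON merge-patch (RFC 7386) semantics: keys in the patch
// overwrite keys in the target, null deletes a key, and nested objects
// are merged recursively.
function mergePatch(target, patch) {
  if (patch === null || typeof patch !== 'object' || Array.isArray(patch)) {
    return patch; // scalars, arrays, and null replace wholesale
  }
  const result =
    target !== null && typeof target === 'object' && !Array.isArray(target)
      ? { ...target }
      : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) {
      delete result[key];
    } else {
      result[key] = mergePatch(result[key], value);
    }
  }
  return result;
}

// A ConfigMap-shaped object with a hypothetical starting value:
const configNetwork = { data: { 'ingress-class': 'old.ingress.class' } };
const patched = mergePatch(configNetwork, {
  data: { 'ingress-class': 'kourier.ingress.networking.knative.dev' },
});
console.log(patched.data['ingress-class']);
// → kourier.ingress.networking.knative.dev
```

Because only the fields present in the patch are touched, any other keys already in the ConfigMap's data survive the update, which is exactly why the merge type is convenient here.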
Finally, fetch the external IP address of the Kourier load balancer with the following command:
- kubectl get svc kourier --namespace kourier-system
The kubectl get svc
command retrieves information about the services running in a Kubernetes cluster in the mentioned namespace (in this case, kourier-system
). This command will list all the services in the cluster with their associated IP addresses and port numbers.
You should receive the following output:
Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kourier LoadBalancer 10.245.186.153 138.197.231.61 80:31245/TCP,443:30762/TCP 2m33s
The output of the command includes the name of the service, the type of service (such as ClusterIP
, NodePort
, and so on), the cluster IP address and port number, and the external IP address and port number. The numbers listed here are examples, so your output will display different numbers.
It may take a few minutes for the load balancer to be provisioned. You may see an empty value or <pending>
for the EXTERNAL-IP
field. If that is the case, wait a few minutes and rerun the command.
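If you want that address in a script rather than by eye, you can pull the EXTERNAL-IP column out of the table. The helper below is a hypothetical convenience (not part of kubectl), shown against the sample output above:

```javascript
// Parse `kubectl get svc` table output and return the EXTERNAL-IP column
// for a named service. Illustrative helper only.
function externalIp(tableOutput, serviceName) {
  const lines = tableOutput.trim().split('\n');
  const header = lines[0].split(/\s+/);
  const col = header.indexOf('EXTERNAL-IP');
  for (const line of lines.slice(1)) {
    const cells = line.split(/\s+/);
    if (cells[0] === serviceName) return cells[col];
  }
  return null;
}

const sample = `NAME      TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
kourier   LoadBalancer   10.245.186.153   138.197.231.61   80:31245/TCP,443:30762/TCP   2m33s`;

console.log(externalIp(sample, 'kourier'));
// → 138.197.231.61
```

In practice, kubectl can also extract this directly with -o jsonpath='{.status.loadBalancer.ingress[0].ip}', which avoids parsing the table at all.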
Note: You will need the load balancer to be provisioned before continuing in this tutorial. Once the EXTERNAL-IP
field for the LoadBalancer
is populated, you can continue. Otherwise, you may experience issues while setting up the DNS service.
You can also configure DNS for your domain name to point to the external IP address of the load balancer. Knative provides a Kubernetes job called default-domain
that will automatically configure Knative Serving to use sslip.io
as the default DNS suffix.
sslip.io is a wildcard DNS (Domain Name System) service: any hostname that embeds an IP address resolves to that address. Using sslip.io lets you reach a service through a memorable hostname without registering a domain name or remembering raw IP addresses.
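The convention is simple enough to sketch: the hostname itself embeds the IP address it should resolve to. The helper below is purely illustrative (it is not part of sslip.io or Knative):

```javascript
// Extract the embedded IPv4 address from an sslip.io-style hostname,
// e.g. node-service.serverless-workload.138.197.231.61.sslip.io
function embeddedIp(hostname) {
  const match = hostname.match(/(\d{1,3}(?:\.\d{1,3}){3})\.sslip\.io$/);
  return match ? match[1] : null;
}

console.log(embeddedIp('node-service.serverless-workload.138.197.231.61.sslip.io'));
// → 138.197.231.61
```

This is why no extra DNS setup is needed: the sslip.io resolvers recover the load balancer's IP from the name itself.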
To configure the default DNS suffix, you will need to run the following command:
- kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-default-domain.yaml
You will receive the following output:
Output
job.batch/default-domain configured
service/default-domain-service configured
The resources required to run the Magic DNS service have been configured successfully.
Note: You can also add a domain if you prefer, though that is beyond the scope of this article. You need to set up a DNS provider (such as Cloud DNS or Route53) and create an A record for the Knative ingress gateway that is mapped to the IP address of your Knative cluster. You would then update the Knative ingress gateway configuration to use the DNS zone and A record you created. You can test the DNS configuration by accessing the Knative serving domain and ensuring it resolves to the ingress gateway.
You have now successfully installed Knative Serving on your Kubernetes cluster. You can now deploy the Serverless workload with Knative Serving on your Kubernetes cluster.
In this step, you will deploy a serverless workload on top of Knative, which is currently running in your Kubernetes cluster. You will use the Node.js application that you created as part of the prerequisites.
Before you proceed, you will create a new namespace
to deploy your serverless workload. You can do this by running the following command:
- kubectl create namespace serverless-workload
This command will create a new namespace called serverless-workload
.
You should receive the following output:
Output
namespace/serverless-workload configured
The output confirms that the namespace was created successfully.
Knative Serving uses a custom resource called Service
to deploy and manage your serverless workloads. The Service
resource is defined by the Knative Serving API.
Once you create or modify the Service
resource, Knative Serving will automatically create a new Revision
. A Revision
is a point-in-time snapshot of your workload.
Whenever a new Revision
is created, traffic will be routed to the new Revision
by a Route
. Knative Serving automatically creates a Route
for each Service. You can access your workload using the domain name from the Route
.
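The relationship between these resources can be sketched as a tiny model (purely illustrative; these are plain objects, not Knative's actual API types):

```javascript
// Purely illustrative model of the Service -> Revision -> Route
// relationship in Knative Serving.
function updateService(service, image) {
  // Every configuration change stamps out a new, immutable Revision.
  const generation = service.revisions.length + 1;
  const revision = {
    name: `${service.name}-${String(generation).padStart(5, '0')}`,
    image,
  };
  service.revisions.push(revision);
  // By default the Route tracks the latest ready Revision.
  service.route = {
    url: `http://${service.name}.example.com`,
    target: revision.name,
  };
  return service;
}

const svc = { name: 'node-service', revisions: [], route: null };
updateService(svc, 'your_dockerhub_username/nodejs-image-demo:v1');
updateService(svc, 'your_dockerhub_username/nodejs-image-demo:v2');
console.log(svc.route.target);
// → node-service-00002
```

This mirrors the naming you will see later in this tutorial, where the first Revision of node-service is called node-service-00001.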
To deploy a serverless workload on Knative, you must create a Service
resource. You can achieve this in two different ways:
- kn, the official Knative CLI tool.
- kubectl, the command line tool for applying YAML files to your Kubernetes cluster.
In the subsections that follow, you will use each of these methods.
The Knative CLI, kn
, is a command-line interface that allows you to interact with Knative.
First, install kn
by downloading the latest version of the Knative CLI binary:
- wget https://github.com/knative/client/releases/download/knative-v1.8.1/kn-linux-amd64
The wget
command will retrieve the tool.
Then, rename the binary to kn:
- mv kn-linux-amd64 kn
Next, make it executable with the following command:
- chmod +x kn
Finally, copy the executable binary file to a directory on your PATH:
- cp kn /usr/local/bin/
Verify that kn
is installed:
- kn version
You should receive a similar output:
Output
Version: v1.8.1
Build Date: 2022-10-20 16:09:37
Git Revision: 1db36698
Supported APIs:
* Serving
- serving.knative.dev/v1 (knative-serving v1.8.0)
* Eventing
- sources.knative.dev/v1 (knative-eventing v1.8.0)
- eventing.knative.dev/v1 (knative-eventing v1.8.0)
The output of this command confirms that kn was installed.
To deploy the Node.js application using kn
, you will use the kn service create
command. You will include some flags with this command:
- --image specifies the image of the container you want to deploy.
- --port specifies the port that your application listens on.
- --name specifies the name of the Service you want to create.
- --namespace specifies the namespace to which you want to deploy the workload.
To deploy your Node.js application, run the following command and update the highlighted portion with your DockerHub username:
- kn service create node-service \
- --image your_dockerhub_username/nodejs-image-demo \
- --port 8080 \
- --namespace serverless-workload
The kn service
command creates the Knative Service named node-service
with the port
set to 8080
and the namespace
flag set to serverless-workload
. The image
flag indicates the location of the application container uploaded in Dockerhub.
You should receive the following output:
Output
Creating service 'node-service' in namespace 'serverless-workload':
0.236s Configuration "node-service" is waiting for a Revision to become ready.
2.230s ...
2.311s Ingress has not yet been reconciled.
2.456s Waiting for load balancer to be ready
2.575s Ready to serve.
Service 'node-service' created to latest revision 'node-service-00001' is available at URL:
http://node-service.serverless-workload.138.197.231.61.sslip.io
This output provides the status of the Knative Service
creation. Once the Service
gets created, you will find the URL of the Route
linked to the Service
.
Run the following command to verify that the Service
resource was created:
- kn service list --namespace serverless-workload
The kn service list
command lists all the services currently deployed with Knative Serving in a particular namespace in the Kubernetes cluster. This command enables you to access details about each Service, including its name, status, and URL.
You should receive a similar output:
Output
NAME URL LATEST AGE CONDITIONS READY REASON
node-service http://node-service.serverless-workload.138.197.231.61.sslip.io node-service-00001 88s 3 OK / 3 True
From this output, you can verify that a new Knative Service
has been created by the kn service
command you executed earlier. You also find the URL for the Route
and the Age
and Status
of the Service.
In this section, you installed the Knative CLI and used it to deploy a serverless workload for your Node.js app.
You can also deploy a Service
resource by creating a YAML file that defines the resource. This method is useful to ensure version control of your workloads.
First, you will create a subdirectory in the directory containing your Dockerfile
. This tutorial uses the name knative
for the subdirectory. Create the folder:
- mkdir knative
Next, you will create a YAML file called service.yaml
in the knative
directory:
- nano knative/service.yaml
In the newly created service.yaml
file, add the following lines to define a Service
that will deploy your Node.js app:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: node-yaml-service
namespace: serverless-workload
spec:
template:
metadata:
name: node-yaml-service-v1
spec:
containers:
- image: docker.io/your_dockerhub_username/nodejs-image-demo
ports:
- containerPort: 8080
The Knative Service YAML file specifies the following information:
- name in the first metadata section specifies the name of the Service resource.
- namespace specifies the namespace to which you want to deploy the workload.
- name in the first spec section specifies the name of the Revision.
- image in the second spec section specifies the image of the container you want to deploy.
- containerPort specifies the port your application listens on.
Be sure to update the highlighted text with the information you’ve selected for your app and system, as well as your DockerHub username.
Save and close the file.
You can now deploy the Service
resource by running the following command:
- kubectl apply -f knative/service.yaml
You should receive the following output:
Output
service.serving.knative.dev/node-yaml-service created
The output indicates that the Service
resource has been created successfully.
Run the following command to verify that the Service
resource has been created:
- kn service list --namespace serverless-workload
The kn service list
command lists all the services currently deployed with Knative Serving in a particular namespace. This command enables you to access details about each Service.
You should receive the following output:
Output
NAME URL LATEST AGE CONDITIONS READY REASON
node-service http://node-service.serverless-workload.174.138.127.211.sslip.io node-service-00001 9d 3 OK / 3 True
node-yaml-service http://node-yaml-service.serverless-workload.174.138.127.211.sslip.io node-yaml-service-v1 9d 3 OK / 3 True
You can verify that a new Knative Service
has been created based on the Knative Service YAML file that you executed previously. You created the node-yaml-service
in this section. You can also find the URL of the Knative Route
.
You used a YAML file in this section to create a serverless workload for your Node.js app.
In this step, you created the Knative Service
resource using both the kn
CLI tool and a YAML file. Next, you will access the application workload you deployed with Knative.
Now that you have deployed the serverless workload, you can access it with the URL from the Knative Route
created as part of the Serverless workload. The Knative Route defines how incoming HTTP traffic should route to a specific service or application.
To get the list of all Knative routes, run the following command:
- kn route list --namespace serverless-workload
The kn route list
command lists all the Knative routes at a particular namespace in the Kubernetes cluster.
You should receive a similar output:
Output
NAME URL READY
node-service http://node-service.serverless-workload.138.197.231.61.sslip.io True
node-yaml-service http://node-yaml-service.serverless-workload.138.197.231.61.sslip.io True
You will use the URLs generated for the Knative Routes to confirm that everything is working as expected.
Open either of the URLs in your browser. When you access the site in your browser, the landing page from your Node app will load:
You have successfully deployed a serverless workload using Knative on your Kubernetes cluster.
In this tutorial, you deployed a serverless workload using Knative. You created a Knative Service
resource using the kn
CLI tool and via YAML files. This resource deployed a Node.js application on your Kubernetes cluster, which you accessed using the Route
URL.
For more about the features that Knative offers, like Autoscaling of pods, gradual rollout of traffic to revision, and the Eventing component, visit the official Knative documentation.
To continue building with DigitalOcean Kubernetes (DOKS), refer to our Kubernetes How-To documentation. You can also learn more about DOKS, such as features and availability. For DOKS troubleshooting, you can refer to our Kubernetes Support guides.
How to fix it?
//npm.pkg.github.com/:_authToken=${TOKEN_FOR_GITHUB}
but if I use .npmrc with a yarn.lock in the project it errors with,
‘Warning: a .npmrc file was found. yarn does not read .npmrc files, use .yarnrc instead if needed.’
So I tried making a .yarnrc file and that just got me a 401 response, and I also tried a .yarnrc.yml and that also got me a 401.
I can’t see another question with this answered, and my project depends on being able to install this package.
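For what it’s worth, Yarn Berry (2+) reads registry credentials from .yarnrc.yml rather than .npmrc, using different keys than the classic .yarnrc. A sketch of the equivalent configuration (the scope name here is a placeholder, and whether this resolves the 401 depends on the token’s permissions):

```yaml
# .yarnrc.yml — per-scope registry auth for GitHub Packages
npmScopes:
  your-scope:
    npmRegistryServer: "https://npm.pkg.github.com"
    npmAlwaysAuth: true
    npmAuthToken: "${TOKEN_FOR_GITHUB}"
```

A 401 with this in place usually points at the token itself (it needs the read:packages scope) rather than the file format.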
[2022-12-05 11:33:07] › configuring custom build command to be run at the end of the build:
[2022-12-05 11:33:07] │ npm run build
[2022-12-05 11:33:07]
[2022-12-05 11:33:07] ╭──────────── buildpack detection ───────────╼
[2022-12-05 11:33:08] │ Detected the following buildpacks suitable to build your app:
[2022-12-05 11:33:08] │
[2022-12-05 11:33:08] │ heroku/nodejs-engine v0.5.1
[2022-12-05 11:33:08] │ digitalocean/node v0.3.4 (Node.js)
[2022-12-05 11:33:08] │ digitalocean/procfile v0.0.3 (Procfile)
[2022-12-05 11:33:08] │ digitalocean/custom v0.1.1 (Custom Build Command)
[2022-12-05 11:33:08] │
[2022-12-05 11:33:08] │ For documentation on the buildpacks used to build your app, please see:
[2022-12-05 11:33:08] │
[2022-12-05 11:33:08] │ Node.js v0.3.4 https://do.co/apps-buildpack-node
[2022-12-05 11:33:08] ╰─────────────────────────────────────────────╼
[2022-12-05 11:33:08]
[2022-12-05 11:33:08] ╭──────────── app build ───────────╼
[2022-12-05 11:33:09] │ ---> Node.js Buildpack
[2022-12-05 11:33:09] │ ---> Installing toolbox
[2022-12-05 11:33:09] │ ---> - jq
[2022-12-05 11:33:09] │ ---> - yj
[2022-12-05 11:33:09] │ ---> Getting Node version
[2022-12-05 11:33:09] │ ---> Resolving Node version
[2022-12-05 11:33:10] │ ERROR: failed to build: exit status 1
[2022-12-05 11:33:11] │
[2022-12-05 11:33:11] │
[2022-12-05 11:33:11] │ For documentation on the buildpacks used to build your app, please see:
[2022-12-05 11:33:11] │
[2022-12-05 11:33:11] │ Node.js v0.3.4 https://do.co/apps-buildpack-node
[2022-12-05 11:33:11] │
[2022-12-05 11:33:11] │ ✘ build failed
In the package.json file I have the following declaration:
"engines": {
"node": "^18.6.0",
"npm": "^8.14.0"
},
The code works when I run the commands locally and also works properly on a Heroku server. I can’t figure out why it fails here.
Any ideas, anyone?
Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that they will restart on reboot or failure and are safe for use in a production environment.
In this tutorial, you will set up a production-ready Node.js environment on a single Rocky Linux 9 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS using a free certificate provided by Let’s Encrypt.
This guide assumes that you have the following:
When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://example.com/
.
Let’s write a Hello World application that returns “Hello World” to any HTTP requests. This sample application will help you get up and running with Node.js. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.
The default text editor that comes with Rocky Linux 9 is vi
. vi
is an extremely powerful text editor, but it can be somewhat obtuse for users who lack experience with it. You might want to install a more user-friendly editor such as nano
to facilitate editing configuration files on your Rocky Linux 9 server:
- sudo dnf install nano
Now, using nano
or your favorite text editor, create a sample application called hello.js
:
- nano hello.js
Insert the following code into the file:
const http = require('http');
const hostname = 'localhost';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Save the file and exit the editor. If you are using nano
, press Ctrl+X
, then when prompted, Y
and then Enter.
This Node.js application listens on the specified address (localhost
) and port (3000
), and returns “Hello World!” with a 200
HTTP success code. Since we’re listening on localhost
, remote clients won’t be able to connect to our application.
To test your application, type:
- node hello.js
You will receive the following output:
Output
Server running at http://localhost:3000/
Note: Running a Node.js application in this manner will block additional commands until the application is killed by pressing CTRL+C
.
To test the application, open another terminal session on your server, and connect to localhost
with curl
:
- curl http://localhost:3000
If you get the following output, the application is working properly and listening on the correct address and port:
Output
Hello World!
If you do not get the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.
Once you’re sure it’s working, kill the application (if you haven’t already) by pressing CTRL+C
.
Next let’s install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.
Use npm
to install the latest version of PM2 on your server:
- sudo npm install pm2@latest -g
The -g
option tells npm
to install the module globally, so that it’s available system-wide.
Let’s first use the pm2 start
command to run your application, hello.js
, in the background:
- pm2 start hello.js
This also adds your application to PM2’s process list, which is outputted every time you start an application:
Output
...
[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ hello │ fork │ 0 │ online │ 0% │ 25.2mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
As indicated above, PM2 automatically assigns an App name (based on the filename, without the .js extension) and a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes on server boot:
- pm2 startup systemd
Output
…
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Copy and run the provided command (this is to avoid permissions issues with running Node.js tools as sudo):
- sudo env PATH=$PATH:/usr/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Output
…
[ 'systemctl enable pm2-sammy' ]
[PM2] Writing init configuration in /etc/systemd/system/pm2-sammy.service
[PM2] Making script booting at startup...
[PM2] [-] Executing: systemctl enable pm2-sammy...
Created symlink /etc/systemd/system/multi-user.target.wants/pm2-sammy.service → /etc/systemd/system/pm2-sammy.service.
[PM2] [v] Command successfully executed.
+---------------------------------------+
[PM2] Freeze a process list on reboot via:
$ pm2 save
[PM2] Remove init script via:
$ pm2 unstartup systemd
Now, you’ll need to make an edit to the system service that was just generated to make it compatible with Rocky Linux’s SELinux security system. Using nano or your favorite text editor, open /etc/systemd/system/pm2-sammy.service:
- sudo nano /etc/systemd/system/pm2-sammy.service
In the [Service] block of the configuration file, replace the contents of the PIDFile setting with /run/pm2.pid as highlighted below, and add the other highlighted Environment line:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=sammy
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/home/sammy/.local/bin:/home/sammy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/home/sammy/.pm2
PIDFile=/run/pm2.pid
Restart=on-failure
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
ExecStart=/usr/local/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/local/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/local/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Save and close the file. You have now created a systemd unit that runs pm2 for your user on boot. This pm2 instance, in turn, runs hello.js.
Start the service with systemctl:
- sudo systemctl start pm2-sammy
Check the status of the systemd unit:
- systemctl status pm2-sammy
For a detailed overview of systemd, please review Systemd Essentials: Working with Services, Units, and the Journal.
In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.
Stop an application with this command (specify the PM2 App name or id):
- pm2 stop app_name_or_id
Restart an application:
- pm2 restart app_name_or_id
List the applications currently managed by PM2:
- pm2 list
Get information about a specific application using its App name:
- pm2 info app_name
The PM2 process monitor can be pulled up with the monit subcommand. This displays the application status, CPU, and memory usage:
- pm2 monit
Note that running pm2 without any arguments will also display a help page with example usage.
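PM2 can also read process settings from an ecosystem file instead of command-line arguments, which becomes convenient once you manage several applications. Below is a minimal, hypothetical ecosystem.config.js for the hello app from this tutorial (the field names are standard PM2 options; start it with pm2 start ecosystem.config.js):

```javascript
// Hypothetical ecosystem.config.js for the hello app managed above.
// PM2 reads the `apps` array and starts one entry per application.
const config = {
  apps: [
    {
      name: 'hello',        // the PM2 App name shown in `pm2 list`
      script: 'hello.js',   // the script to run
      instances: 1,         // fork mode with a single instance
      autorestart: true,    // restart automatically if the app crashes
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
};

module.exports = config;
```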
Now that your Node.js application is running and managed by PM2, let’s set up the reverse proxy.
Your application is running and listening on localhost, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.
In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/conf.d/your_domain.conf file. Open this file for editing:
- sudo nano /etc/nginx/conf.d/your_domain.conf
Within the server block, you should have an existing location / block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the highlighted portion to the correct port number:
server {
...
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
This configures the server to respond to requests at its root. Assuming our server is available at your_domain, accessing https://your_domain/ via a web browser would send the request to hello.js, listening on port 3000 at localhost.
You can add additional location blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001, you could add this location block to allow access to it via https://your_domain/app2:
server {
...
location /app2 {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
Once you are done adding the location blocks for your applications, save the file and exit your editor.
Make sure you didn’t introduce any syntax errors by typing:
- sudo nginx -t
Restart Nginx:
- sudo systemctl restart nginx
Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your server’s URL (its public IP address or domain name).
Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on a Rocky Linux 9 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.
Next, you may want to look into How to build a Node.js application with Docker.
Prisma is an open-source ORM for Node.js and TypeScript. It consists of three main tools: Prisma Client, Prisma Migrate, and Prisma Studio.
These tools aim to increase an application developer’s productivity in their database workflows. One of the top benefits of Prisma is the level of abstraction it provides: Instead of figuring out complex SQL queries or schema migrations, application developers can reason about their data in a more intuitive way when using Prisma.
In this tutorial, you will build a REST API for a small blogging application in TypeScript using Prisma and a PostgreSQL database. You will set up your PostgreSQL database locally with Docker and implement the REST API routes using Express. At the end of the tutorial, you will have a web server running locally on your machine that can respond to various HTTP requests and read and write data in the database.
This tutorial assumes the following:
Basic familiarity with TypeScript and REST APIs is helpful but not required for this tutorial.
In this step, you will set up a plain TypeScript project using npm. This project will be the foundation for the REST API you’re going to build in this tutorial.
First, create a new directory for your project:
- mkdir my-blog
Next, navigate into the directory and initialize an empty npm project. Note that the -y option here means that you’re skipping the interactive prompts of the command. To run through the prompts, remove -y from the command:
- cd my-blog
- npm init -y
For more details on these prompts, you can follow Step 1 in How To Use Node.js Modules with npm and package.json.
You’ll receive output similar to the following with the default responses in place:
Output
Wrote to /.../my-blog/package.json:
{
"name": "my-blog",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
This command creates a minimal package.json file that you use as the configuration file for your npm project. You’re now ready to configure TypeScript in your project.
Execute the following command for a plain TypeScript setup:
- npm install typescript ts-node @types/node --save-dev
This installs three packages as development dependencies in your project:
- typescript: The TypeScript toolchain.
- ts-node: A package to run TypeScript applications without prior compilation to JavaScript.
- @types/node: The TypeScript type definitions for Node.js.

The last thing to do is to add a tsconfig.json file to ensure TypeScript is properly configured for the application you’re going to build.
First, run the following command to create the file:
- nano tsconfig.json
Add the following JSON code into the file:
{
"compilerOptions": {
"sourceMap": true,
"outDir": "dist",
"strict": true,
"lib": ["esnext"],
"esModuleInterop": true
}
}
Save and exit the file.
This setup is a standard and minimal configuration for a TypeScript project. If you want to learn about the individual properties of the configuration file, you can review the TypeScript documentation.
You’ve set up your plain TypeScript project using npm. Next you’ll set up your PostgreSQL database with Docker and connect Prisma to it.
In this step, you will install the Prisma CLI, create your initial Prisma schema file, and set up PostgreSQL with Docker and connect Prisma to it. The Prisma schema is the main configuration file for your Prisma setup and contains your database schema.
Start by installing the Prisma CLI with the following command:
- npm install prisma --save-dev
As a best practice, it is recommended to install the Prisma CLI locally in your project (rather than as a global installation). This practice helps avoid version conflicts in case you have more than one Prisma project on your machine.
Next, you’ll set up your PostgreSQL database using Docker. Create a new Docker Compose file with the following command:
- nano docker-compose.yml
Now add the following code to the newly created file:
version: '3.8'
services:
postgres:
image: postgres:10.3
restart: always
environment:
- POSTGRES_USER=sammy
- POSTGRES_PASSWORD=your_password
volumes:
- postgres:/var/lib/postgresql/data
ports:
- '5432:5432'
volumes:
postgres:
This Docker Compose file configures a PostgreSQL database that can be accessed via port 5432 of the Docker container. The database credentials are currently set as sammy (user) and your_password (password). Feel free to adjust these credentials to your preferred user and password. Save and exit the file.
With this setup in place, launch the PostgreSQL database server with the following command:
- docker-compose up -d
The output of this command will be similar to this:
Output
Pulling postgres (postgres:10.3)...
10.3: Pulling from library/postgres
f2aa67a397c4: Pull complete
6de83ca23e55: Pull complete
. . .
Status: Downloaded newer image for postgres:10.3
Creating my-blog_postgres_1 ... done
You can verify that the database server is running with the following command:
- docker ps
This command will output something similar to this:
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8547f8e007ba postgres:10.3 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 0.0.0.0:5432->5432/tcp my-blog_postgres_1
With the database server running, you can now create your Prisma setup. Run the following command from the Prisma CLI:
- npx prisma init
This command will print the following output:
Output
✔ Your Prisma schema was created at prisma/schema.prisma.
You can now open it in your favorite editor.
As a best practice, you should prefix all invocations of the Prisma CLI with npx to ensure your local installation is being used.
After you run the command, the Prisma CLI creates a new folder called prisma in your project. Inside it, you will find a schema.prisma file, which is the main configuration file for your Prisma project (including your data model). This command also adds a .env dotenv file to your root folder, which is where you will define your database connection URL.
To ensure Prisma knows about the location of your database, open the .env file and adjust the DATABASE_URL environment variable.
First open the .env file:
- nano .env
Now you can update the environment variable as follows:
DATABASE_URL="postgresql://sammy:your_password@localhost:5432/my-blog?schema=public"
Make sure to change the database credentials to the ones you specified in the Docker Compose file. To learn more about the format of the connection URL, visit the Prisma docs.
Once you’re done, save and exit the file.
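To make the structure of that connection URL concrete, the following sketch dissects it with the WHATWG URL class built into Node.js. The values are the example credentials from the Docker Compose file; this snippet is purely illustrative and not part of the application you’re building:

```javascript
// The connection URL follows this format:
// postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA
const url = new URL(
  'postgresql://sammy:your_password@localhost:5432/my-blog?schema=public'
);

console.log(url.username);                   // "sammy" (database user)
console.log(url.password);                   // "your_password"
console.log(url.hostname);                   // "localhost" (database host)
console.log(url.port);                       // "5432" (database port)
console.log(url.pathname.slice(1));          // "my-blog" (database name)
console.log(url.searchParams.get('schema')); // "public" (PostgreSQL schema)
```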
In this step, you set up your PostgreSQL database with Docker, installed the Prisma CLI, and connected Prisma to the database via an environment variable. In the next section, you’ll define your data model and create your database tables.
In this step, you will define your data model in the Prisma schema file. This data model will then be mapped to the database with Prisma Migrate, which will generate and send the SQL statements for creating the tables that correspond to your data model. Since you’re building a blogging application, the main entities of the application will be users and posts.
Prisma uses its own data modeling language to define the shape of your application data.
First, open your schema.prisma file with the following command:
- nano prisma/schema.prisma
Now, add the following model definitions to it. You can place the models at the bottom of the file, right after the generator client block:
. . .
model User {
id Int @default(autoincrement()) @id
email String @unique
name String?
posts Post[]
}
model Post {
id Int @default(autoincrement()) @id
title String
content String?
published Boolean @default(false)
author User? @relation(fields: [authorId], references: [id])
authorId Int?
}
You are defining two models: User and Post. Each of these has a number of fields that represent the properties of the model. The models will be mapped to database tables; the fields represent the individual columns.
There is a one-to-many relation between the two models, specified by the posts and author relation fields on User and Post. This means that one user can be associated with many posts.
Save and exit the file.
With these models in place, you can now create the corresponding tables in the database using Prisma Migrate. In your terminal, run the following command:
- npx prisma migrate dev --name init
This command creates a new SQL migration on your filesystem and sends it to the database. The --name init option provided to the command specifies the name of the migration and will be used to name the migration folder created on your filesystem.
The output of this command will be similar to this:
Output
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "my-blog", schema "public" at "localhost:5432"
PostgreSQL database my-blog created at localhost:5432
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20201209084626_init/
└─ migration.sql
Running generate... (Use --skip-generate to skip the generators)
✔ Generated Prisma Client (2.13.0) to ./node_modules/@prisma/client in 75ms
The SQL migration file in the prisma/migrations/20201209084626_init/migration.sql directory has the following statements that were executed against the database (the highlighted portion of the filename may differ in your setup):
-- CreateTable
CREATE TABLE "User" (
"id" SERIAL,
"email" TEXT NOT NULL,
"name" TEXT,
PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Post" (
"id" SERIAL,
"title" TEXT NOT NULL,
"content" TEXT,
"published" BOOLEAN NOT NULL DEFAULT false,
"authorId" INTEGER,
PRIMARY KEY ("id")
);
-- CreateIndex
CREATE UNIQUE INDEX "User.email_unique" ON "User"("email");
-- AddForeignKey
ALTER TABLE "Post" ADD FOREIGN KEY ("authorId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
You can also customize the generated SQL migration file if you add the --create-only option to the prisma migrate dev command; for example, you could set up a trigger or use other features of the underlying database.
In this step, you defined your data model in your Prisma schema and created the respective database tables with Prisma Migrate. In the next step, you’ll install Prisma Client in your project so that you can query the database.
Prisma Client is an auto-generated and type-safe query builder that you can use to programmatically read and write data in a database from a Node.js or TypeScript application. You will use it for database access within your REST API routes, replacing traditional ORMs, plain SQL queries, custom data access layers, or any other method of talking to a database.
In this step, you will install Prisma Client and become familiar with the queries you can send with it. Before implementing the routes for your REST API in the next steps, you will first explore some of the Prisma Client queries in a plain, executable script.
First, install Prisma Client in your project folder with the Prisma Client npm package:
- npm install @prisma/client
Next, create a new directory called src that will contain your source files:
- mkdir src
Now create a TypeScript file inside of the new directory:
- nano src/index.ts
All of the Prisma Client queries return promises that you can await in your code. This requires you to send the queries inside of an async function.
In the src/index.ts file, add the following boilerplate with an async function that’s executed in your script:
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
async function main() {
// ... your Prisma Client queries will go here
}
main()
.catch((e) => console.error(e))
.finally(async () => await prisma.$disconnect())
Here’s a quick breakdown of the boilerplate:
- You import the PrismaClient constructor from the previously installed @prisma/client npm package.
- You instantiate PrismaClient by calling the constructor and obtaining an instance called prisma.
- You define an async function called main where you’ll add your Prisma Client queries.
- You call the main function, catching any potential exceptions and ensuring Prisma Client closes any open database connections with prisma.$disconnect().

With the main function in place, you can start adding Prisma Client queries to the script. Adjust index.ts to include the highlighted lines in the async function:
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
async function main() {
const newUser = await prisma.user.create({
data: {
name: 'Alice',
email: 'alice@prisma.io',
posts: {
create: {
title: 'Hello World',
},
},
},
})
console.log('Created new user: ', newUser)
const allUsers = await prisma.user.findMany({
include: { posts: true },
})
console.log('All users: ')
console.dir(allUsers, { depth: null })
}
main()
.catch((e) => console.error(e))
.finally(async () => await prisma.$disconnect())
In this code, you’re using two Prisma Client queries:
- create: Creates a new User record. You use a nested write query to create both a User and a Post record in the same query.
- findMany: Reads all existing User records from the database. You provide the include option that additionally loads the related Post records for each User record.

Save and close the file.
Now run the script with the following command:
- npx ts-node src/index.ts
You will receive the following output in your terminal:
Output
Created new user: { id: 1, email: 'alice@prisma.io', name: 'Alice' }
[
{
id: 1,
email: 'alice@prisma.io',
name: 'Alice',
posts: [
{
id: 1,
title: 'Hello World',
content: null,
published: false,
authorId: 1
}
]
}
Note: If you are using a database GUI, you can validate that the data was created by reviewing the User and Post tables. Alternatively, you can explore the data in Prisma Studio by running npx prisma studio.
You’ve now used Prisma Client to read and write data in your database. In the remaining steps, you’ll implement the routes for a sample REST API.
In this step, you will install Express in your application. Express is a popular web framework for Node.js that you will use to implement your REST API routes in this project. The first route you will implement will allow you to fetch all users from the API using a GET request. The user data will be retrieved from the database using Prisma Client.
Install Express with the following command:
- npm install express
Since you’re using TypeScript, you’ll also want to install the respective types as development dependencies. Run the following command to do so:
- npm install @types/express --save-dev
With the dependencies in place, you can set up your Express application.
Open your main source file again:
- nano src/index.ts
Now delete all the code in index.ts
and replace it with the following to start your REST API:
import { PrismaClient } from '@prisma/client'
import express from 'express'
const prisma = new PrismaClient()
const app = express()
app.use(express.json())
// ... your REST API routes will go here
app.listen(3000, () =>
console.log('REST API server ready at: http://localhost:3000'),
)
Here’s a quick breakdown of the code:
- You import PrismaClient and express from the respective npm packages.
- You instantiate PrismaClient by calling the constructor and obtaining an instance called prisma.
- You create your Express app by calling express().
- You add the express.json() middleware to ensure JSON data can be processed properly by Express.
- You start the server on port 3000.

Now you can implement your first route. Between the calls to app.use and app.listen, add the highlighted lines to create an app.get call:
. . .

app.use(express.json())

app.get('/users', async (req, res) => {
  const users = await prisma.user.findMany()
  res.json(users)
})

app.listen(3000, () =>
  console.log('REST API server ready at: http://localhost:3000'),
)
Once added, save and exit your file. Then start your local web server using the following command:
- npx ts-node src/index.ts
You will receive the following output:
Output
REST API server ready at: http://localhost:3000
To access the /users
route you can point your browser to http://localhost:3000/users
or any other HTTP client.
In this tutorial, you will test all REST API routes using curl
, a terminal-based HTTP client.
Note: If you prefer to use a GUI-based HTTP client, you can use alternatives like Hoppscotch or Postman.
To test your route, open up a new terminal window or tab (so that your local web server can keep running) and execute the following command:
- curl http://localhost:3000/users
You will receive the User
data that you created in the previous step:
Output
[{"id":1,"email":"alice@prisma.io","name":"Alice"}]
The posts
array is not included this time because you’re not passing the include
option to the findMany
call in the implementation of the /users
route.
You’ve implemented your first REST API route at /users
. In the next step you will implement the remaining REST API routes to add more functionality to your API.
In this step, you will implement the remaining REST API routes for your blogging application. At the end, your web server will serve various GET
, POST
, PUT
, and DELETE
requests.
The routes you will implement include the following options:
HTTP Method | Route | Description
---|---|---
GET | /feed | Fetches all published posts.
GET | /post/:id | Fetches a specific post by its ID.
POST | /user | Creates a new user.
POST | /post | Creates a new post (as a draft).
PUT | /post/publish/:id | Sets the published field of a post to true.
DELETE | /post/:id | Deletes a post by its ID.
You will implement the two remaining GET
routes first.
You can stop the server by pressing CTRL+C
on your keyboard. Then, you can update your index.ts
file by first opening the file for editing:
- nano src/index.ts
Next, add the highlighted lines following the implementation of the /users route:
. . .

app.get('/feed', async (req, res) => {
  const posts = await prisma.post.findMany({
    where: { published: true },
    include: { author: true }
  })
  res.json(posts)
})

app.get(`/post/:id`, async (req, res) => {
  const { id } = req.params
  const post = await prisma.post.findUnique({
    where: { id: Number(id) },
  })
  res.json(post)
})

app.listen(3000, () =>
  console.log('REST API server ready at: http://localhost:3000'),
)
This code implements the API routes for two GET requests:
- /feed: Returns a list of published posts.
- /post/:id: Returns a specific post by its ID.

Prisma Client is used in both implementations. In the /feed
route implementation, the query you send with Prisma Client filters for all Post
records where the published
column contains the value true
. Additionally, the Prisma Client query uses include
to also fetch the related author
information for each returned post. In the /post/:id
route implementation, you pass the ID that is retrieved from the URL’s path in order to read a specific Post
record from the database.
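One detail worth calling out: Express delivers route parameters as strings, which is why the implementation wraps the ID in Number() before handing it to Prisma Client. A standalone sketch (the params object below is a hypothetical stand-in for req.params):

```javascript
// Hypothetical stand-in for req.params after a request to GET /post/1:
const params = { id: '1' };

// Route params arrive as strings, so cast before using them in a numeric filter:
const id = Number(params.id);

console.log(id);        // 1
console.log(typeof id); // 'number'
```

Since the id column is an integer, the cast keeps the query's where filter type-correct.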
Save and exit your file. Then, restart the server using:
- npx ts-node src/index.ts
To test the /feed
route, you can use the following curl
command in your second terminal session:
- curl http://localhost:3000/feed
Since no posts have been published yet, the response is an empty array:
Output
[]
To test the /post/:id
route, you can use the following curl
command:
- curl http://localhost:3000/post/1
This command will return the post you initially created:
Output
{"id":1,"title":"Hello World","content":null,"published":false,"authorId":1}
Next, you will implement the two POST
routes. In your original terminal session, stop the server with CTRL+C
, then open index.ts
for editing:
- nano src/index.ts
Add the highlighted lines to index.ts
following the implementations of the three GET
routes:
. . .

app.post(`/user`, async (req, res) => {
  const result = await prisma.user.create({
    data: { ...req.body },
  })
  res.json(result)
})

app.post(`/post`, async (req, res) => {
  const { title, content, authorEmail } = req.body
  const result = await prisma.post.create({
    data: {
      title,
      content,
      published: false,
      author: { connect: { email: authorEmail } },
    },
  })
  res.json(result)
})

app.listen(3000, () =>
  console.log('REST API server ready at: http://localhost:3000'),
)
This code implements the API routes for two POST requests:
- /user: Creates a new user in the database.
- /post: Creates a new post in the database.

Like before, Prisma Client is used in both implementations. In the /user
route implementation, you pass in the values from the body of the HTTP request to the Prisma Client create
query.
The /post route is a bit more involved. You can’t directly pass in the values from the body of the HTTP request; instead, you first need to extract them manually and pass them to the Prisma Client query. Because the structure of the JSON in the request body does not match the structure Prisma Client expects, you need to build the expected structure manually.
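To make that mismatch concrete, here is a small standalone sketch; the flat body object is a hypothetical example of what a client might send, and the nested data object is the shape the create query receives:

```javascript
// Hypothetical JSON body as a client would send it (flat structure):
const body = { title: 'I am Bob', content: null, authorEmail: 'bob@prisma.io' };

// The nested structure Prisma Client expects, where the author relation
// is wired up through the `connect` option rather than a plain field:
const data = {
  title: body.title,
  content: body.content,
  published: false,
  author: { connect: { email: body.authorEmail } },
};

console.log(data.author.connect.email); // 'bob@prisma.io'
```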
Once you’re done, save and exit your file.
Restart the server using:
- npx ts-node src/index.ts
To create a new user via the /user
route, you can send the following POST
request with curl
:
- curl -X POST -H "Content-Type: application/json" -d '{"name":"Bob", "email":"bob@prisma.io"}' http://localhost:3000/user
This will create a new user in the database, printing the following output:
Output
{"id":2,"email":"bob@prisma.io","name":"Bob"}
To create a new post via the /post
route, you can send the following POST
request with curl
:
- curl -X POST -H "Content-Type: application/json" -d '{"title":"I am Bob", "authorEmail":"bob@prisma.io"}' http://localhost:3000/post
This will create a new post in the database and connect it to the user with the email bob@prisma.io
. It prints the following output:
Output
{"id":2,"title":"I am Bob","content":null,"published":false,"authorId":2}
Finally, you will implement the PUT
and DELETE
routes. Stop the development server, then open up index.ts
with the following command:
- nano src/index.ts
Next, following the implementation of the two POST
routes, add the highlighted code:
. . .

app.put('/post/publish/:id', async (req, res) => {
  const { id } = req.params
  const post = await prisma.post.update({
    where: { id: Number(id) },
    data: { published: true },
  })
  res.json(post)
})

app.delete(`/post/:id`, async (req, res) => {
  const { id } = req.params
  const post = await prisma.post.delete({
    where: { id: Number(id) },
  })
  res.json(post)
})

app.listen(3000, () =>
  console.log('REST API server ready at: http://localhost:3000'),
)
This code implements the API routes for one PUT and one DELETE request:
- /post/publish/:id (PUT): Publishes a post by its ID.
- /post/:id (DELETE): Deletes a post by its ID.

Again, Prisma Client is used in both implementations. In the /post/publish/:id
route implementation, the ID of the post to be published is retrieved from the URL and passed to the update
query of Prisma Client. The implementation of the /post/:id
route to delete a post in the database also retrieves the post ID from the URL and passes it to the delete
query of Prisma Client.
Save and exit your file.
Restart the server using:
- npx ts-node src/index.ts
You can test the PUT
route with the following curl
command:
- curl -X PUT http://localhost:3000/post/publish/2
This command will publish the post with an ID value of 2
. If you resend the /feed
request, this post will now be included in the response.
Finally, you can test the DELETE
route with the following curl
command:
- curl -X DELETE http://localhost:3000/post/1
This command will delete the post with an ID value of 1
. To validate that the post with this ID has been deleted, you can resend a GET
request to the /post/1
route with the following curl
command:
- curl http://localhost:3000/post/1
In this step, you implemented the remaining REST API routes for your blogging application. The API now responds to various GET
, POST
, PUT
, and DELETE
requests and implements functionality to read and write data in the database.
In this article, you created a REST API server with a number of different routes to create, read, update, and delete user and post data for a sample blogging application. Inside the API routes, you use the Prisma Client to send the respective queries to your database.
As next steps, you can implement additional API routes or extend your database schema using Prisma Migrate. Visit the Prisma documentation to learn about different aspects of Prisma and explore some ready-to-run example projects using tools such as GraphQL or gRPC APIs in the prisma-examples
repository.
Unique identifiers (UIDs), or identifiers, can be a string value or an integer, and API developers often use them to address unique resources in an API. API consumers then use these identifiers to fetch a single resource from a collection of resources. Without a unique identifier, separating the resources and calling them as required is almost impossible. Identifiers can refer to database structural elements, like the names of tables, fields (columns) in a table, or constraints, and can be further specified to a unique item in the database. For example, in a database related to a hotel booking portal, Hotel(id)
might point to an identifier that refers to a unique hotel. With Hotel(id=1234, name="Hyatt")
, you would be able to identify this specific hotel by the ID 1234
or the name "Hyatt"
.
In API Design Patterns, John J. Geewax identifies seven fundamental characteristics of a good identifier. These characteristics are important to consider when generating a unique ID:
- Avoid special characters such as the forward slash (/), as these characters carry specific meaning in URLs.
- Avoid easily confused characters, such as the digit 1, a lowercase L, an uppercase I, or the pipe character (|), as these characters may create confusion if someone needs to check the ID manually.

Note: Changing the identifier can create unexpected confusion. If you have an identifier that specifies Hotel(id=1234, name="Hyatt")
and that later changes to Hotel(id=5678, name="Hyatt")
, the prior ID might be available for reuse. If the prior identifier is available and a new hotel is created as Hotel(id=1234, name="Grand Villa")
, this new hotel reuses the original identifier (1234
). Then, when you ask for hotel 1234
, you may receive different results than expected.
In this tutorial, you will generate a unique custom resource identifier fulfilling these characteristics, along with an associated checksum, using Node.js. A checksum is a digital fingerprint of a file or of digital data, obtained by running a hash function on the digital object. This tutorial’s checksum will be a single alphanumeric character derived by algorithmically encoding (or hashing) the bytes corresponding to your resource.
Before you begin this tutorial, you’ll need the following:
- A Node.js development environment.
- A text editor such as nano.

In this step, you will write a function to turn random bytes into a unique alphanumeric string. Your identifier will be encoded using base32 encoding, but it will not have an associated checksum until later in the tutorial. The encoding process will create a unique identifier of a specified length based on the number of bytes you choose, building an ID that incorporates some of the characteristics of a good ID.
Start by making a new folder for this project, then move into that folder:
- mkdir checksum
- cd checksum
The project folder will be called checksum
for this tutorial.
Create and open a package.json
file in your project folder (using your favorite editor):
- nano package.json
Then add the following lines of code:
{
  "name": "checksum",
  "version": "1.0.0",
  "main": "index.js",
  "type": "module"
}
In this file, you define the project name as checksum
, and you consider the code version "1.0.0"
. You define the main JavaScript file as index.js
. When you have "type": "module"
in the package.json
file, your source code should use import syntax. In this file, you use the JSON data format, which you can learn more about in How to Work with JSON in JavaScript.
Save and close the file.
You’ll use a few Node.js modules to generate the ID: crypto
and base32-encode
, with its corresponding decoder base32-decode
. The crypto
module is packaged with Node.js, but you will need to install base32-encode
and base32-decode
for use later in this tutorial. Encoding is putting a sequence of characters (letters, numbers, punctuation, and certain symbols) into a specialized format for efficient transmission or storage. Decoding is the opposite process: converting an encoded format back into the original sequence of characters. Base32 encoding uses a 32-character set, which makes it a textual 32-symbol notation for expressing numbers.
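To build intuition for what the base32-encode package does, here is a simplified sketch of Crockford base32 encoding (an illustration only, not the package's actual implementation): the byte stream is consumed five bits at a time, and each five-bit value indexes into a 32-character alphabet.

```javascript
// Crockford's 32-character alphabet (no I, L, O, or U to avoid confusion).
const CROCKFORD = '0123456789ABCDEFGHJKMNPQRSTVWXYZ';

function toBase32(bytes) {
  let bits = 0;      // bit accumulator
  let bitCount = 0;  // number of bits currently in the accumulator
  let output = '';
  for (const byte of bytes) {
    bits = (bits << 8) | byte;
    bitCount += 8;
    while (bitCount >= 5) {
      // Take the top five bits and map them to one character.
      output += CROCKFORD[(bits >>> (bitCount - 5)) & 31];
      bitCount -= 5;
    }
  }
  if (bitCount > 0) {
    // Pad the final partial group with zero bits.
    output += CROCKFORD[(bits << (5 - bitCount)) & 31];
  }
  return output;
}

console.log(toBase32([0xff]));       // 'ZW'
console.log(toBase32([0x48, 0x69])); // '91MG'
```

Because one character covers exactly five bits while a byte has eight, the encoded string is always longer than the byte count suggests.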
In a terminal session, install these module packages in the project folder with the following command:
- npm i base32-encode base32-decode
You will receive an output that indicates these modules have been added:
Output
added 3 packages, and audited 5 packages in 2s
found 0 vulnerabilities
If you encounter issues during installation, you can refer to How To Use Node.js Modules with npm and package.json for support.
Still in your project folder, create a new file called index.js
:
- nano index.js
Add the following lines of JavaScript code to the index.js
file:
import crypto from 'crypto';
import base32Encode from 'base32-encode';
import base32Decode from 'base32-decode';

function generate_Id(byte_size) {
  const bytes = crypto.randomBytes(byte_size);
  return base32Encode(bytes, 'Crockford');
}

console.log('ID for byte size = 1:', generate_Id(1), '\n');
console.log('ID for byte size = 12:', generate_Id(12), '\n');
console.log('ID for byte size = 123:', generate_Id(123), '\n');
The import statements load the required modules. You define a generate_Id function that takes a byte size and creates random bytes of that size using the randomBytes function from the crypto module. The generate_Id function then encodes these bytes using the Crockford implementation of base32 encoding.
For instructional purposes, a few IDs are generated and then logged to the console. The base32-decode
module will be used to decode the resource ID in the next steps.
Save your index.js
file, then run the code in a terminal session with this command:
- node index.js
You will receive an output response similar to this:
Output
ID for byte size = 1: Y8
ID for byte size = 12: JTGSEMQH2YZFD3H35HJ0
ID for byte size = 123: QW2E2KJKM8QZ7174DDB1Q3JMEKV7328EE8T79V1KG0TEAE67DEGG1XS4AR57FPCYTS24J0ZRR3E6TKM28AM8FYZ2AZTZ55C9VVQTABE0R7QRH7QBY7V3GBYBNN5D9JK0QMD9NXSWZN95S0772DHN43Q003G0QNTPA2J3AFA3P7Q167C1VNR92Z85PCDXCMEY0M7WA
Your ID values might differ due to the randomness of generated bytes. The generated ID may be shorter or longer in length, depending on the byte size you select.
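The length difference is predictable: each base32 character carries five bits, so encoding n bytes (8n bits) produces ceil(8n / 5) characters. A quick sketch of that arithmetic:

```javascript
// Each base32 character encodes 5 bits, so an n-byte input
// yields ceil(8n / 5) characters.
const idLength = (byteSize) => Math.ceil((byteSize * 8) / 5);

console.log(idLength(1));   // 2
console.log(idLength(12));  // 20
console.log(idLength(123)); // 197
```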
Back in index.js
, comment out the console outputs using the JavaScript commenting feature (adding a double slash //
before the line):
...
//console.log('ID for byte size = 1:',generate_Id(1), '\n');
//console.log('ID for byte size = 12:',generate_Id(12), '\n');
//console.log('ID for byte size = 123:',generate_Id(123), '\n');
These lines demonstrate how encoding will output different identifiers based on the bytes associated. Because these lines will not be used in the following sections, you can comment them out as demonstrated in this code block or delete them entirely.
In this step, you created an encoded ID by encoding random bytes. In the next step, you will combine the encoded bytes and a checksum, creating a unique identifier.
Now you will create an ID with a checksum character. Generating the checksum character is a two-step process. For instructional purposes, each function in the composite process will be built separately in the following subsections. First, you will write a function that runs a modulo operation. Then, you will write another function that maps the result to a checksum character, which is how you will generate the checksum for your resource ID. Finally, you will verify the identifier and checksum to ensure that the resource identifier is accurate.
In this section, you will convert the bytes corresponding to the ID to a number between 0 and 36 (inclusive). The bytes are first converted to a BigInteger (BigInt) value, and a modulo operation then reduces that value to an integer remainder in this range.
To implement this procedure, add the following lines of code to the bottom of the index.js
file:
...

function calculate_checksum(bytes) {
  const intValue = BigInt(`0x${bytes.toString('hex')}`);
  return Number(intValue % BigInt(37));
}
The function calculate_checksum
works with the bytes defined earlier in the file. This function will convert bytes to hexadecimal values, which are further converted to BigInteger BigInt
values. The BigInt
data type represents numbers greater than those represented by the primitive data type number
in JavaScript. For example, although the integer 37
is relatively small, it is converted to BigInt
for the modulo operation.
To achieve this conversion, you first set the intValue
variable with the BigInt
conversion method, using the toString
method to set bytes
to hex
. Then, you return a numerical value with the Number
constructor, in which you run the modulo operation with the %
symbol to find the remainder between the intValue
and BigInt
using the sample value of 37
. The remainder of that operation (an integer between 0 and 36) acts as an index to select an alphanumeric character from a custom-built string of alphanumeric characters.
If the intValue is 123 (depending on the bytes), the modulo operation will be 123 % 37, which yields a quotient of 3 and a remainder of 12. With an intValue of 154, the operation 154 % 37 results in a remainder of 6.
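You can trace these steps by hand with a fixed buffer instead of random bytes; the two-byte value below is chosen purely for illustration:

```javascript
// A fixed two-byte buffer whose integer value is 123 (0x007b).
const bytes = Buffer.from([0x00, 0x7b]);

// Same steps as calculate_checksum: hex string -> BigInt -> modulo 37.
const intValue = BigInt(`0x${bytes.toString('hex')}`);
console.log(intValue);                      // 123n
console.log(Number(intValue % BigInt(37))); // 12, since 123 = 3 * 37 + 12
```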
This function maps the incoming bytes to the modulo result. Next, you will write a function to map the modulo result to a checksum character.
After obtaining the modulo result in the previous section, you can map it to a checksum character.
Add the following lines of code to the index.js
file just below the previous code:
...

function get_checksum_character(checksumValue) {
  const alphabet = '0123456789ABCDEFG' +
    'HJKMNPQRSTVWXYZ*~$=U'; // custom-built string of alphanumeric characters
  return alphabet[Math.abs(checksumValue)]; // picking out an alphanumeric character
}
For the function get_checksum_character
, you call checksumValue
as a parameter. Within this function, you define a string constant named alphabet
as a custom-built string of alphanumeric characters. Depending on the value set for the checksumValue
, this function will return a value that pairs the defined string from the alphabet
constant with the absolute value of the checksumValue
.
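Because the alphabet holds exactly 37 characters, every possible remainder from the modulo step (0 through 36) maps to exactly one character. A quick check:

```javascript
// The same 37-character alphabet used by get_checksum_character.
const alphabet = '0123456789ABCDEFG' +
  'HJKMNPQRSTVWXYZ*~$=U';

console.log(alphabet.length); // 37 -- one character per possible remainder (0-36)
console.log(alphabet[12]);    // 'C' -- the checksum character for remainder 12
console.log(alphabet[36]);    // 'U' -- the character for the largest remainder
```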
Next, you will write a function that uses the two functions written in these sections to generate an ID from the encoding of bytes combined with a checksum character.
Add the following lines of code to the index.js
file:
...

function generate_Id_with_checksum(bytes_size) {
  const bytes = crypto.randomBytes(bytes_size);
  const checksum = calculate_checksum(bytes);
  const checksumChar = get_checksum_character(checksum);
  console.log("checksum character: ", checksumChar);
  const encoded = base32Encode(bytes, 'Crockford');
  return encoded + checksumChar;
}

const Hotel_resource_id = generate_Id_with_checksum(132);
console.log("Hotel resource id: ", Hotel_resource_id);
This section of code combines your two previous functions, calculate_checksum
and get_checksum_character
(which are used to generate checksum characters), with the encoding function into a new function aptly named generate_Id_with_checksum
that will create an ID with a checksum character.
Save the file, then run the code in a separate terminal session:
- node index.js
You will receive an output similar to this:
Output
checksum character: B
Hotel resource id: 9V99B9P55K7M4DN5XYP4VTJYJGENZKJ0F9Q6EEEZ07X49G0V14AXJS3RYXBT3J1WJZXWGM76C6H7G895TJT27AW77BHBX2D16QNQ2ZNBY9MQHWG9NJ1WWVTNRCKRBX6HC3M7BB3JG0V413VJ767JN6FT0GFS5VQJ9X7KSP1KM29B02NAGXN3FP30WA8Y63N1XJAMGDPEE1RNHRTWH6P0B
The same checksum character appears at the end of the ID, indicating that the checksums match.
This schematic chart provides a structural representation of how this composite function works:
This flowchart demonstrates how a product ID, which is an identifier that was manually created by a counter for the resource, is transformed into a unique resource ID through the encoding and modulo process. The crypto method
in the diagram refers to the crypto.randomBytes()
function.
You created an ID based on the byte size that includes the checksum character. In the next section, you will implement a function to verify the integrity of the ID using base32-decode.
To ensure integrity, you will now compare the checksum character (the last character from the identifier) with the checksum generated, using a new function called verify_Id
. Comparing the checksum character is an essential step to check the integrity of the original ID and ascertain that it has not been tampered with.
Add these lines to your index.js
file:
...

function verify_Id(identifier) {
  const value = identifier.substring(0, identifier.length - 1);
  const checksum_char = identifier[identifier.length - 1];
  const buffer = Buffer.from(base32Decode(value, 'Crockford'));
  const calculated_checksum_char = get_checksum_character(calculate_checksum(buffer));
  console.log(calculated_checksum_char);
  const flag = calculated_checksum_char == checksum_char;
  return flag;
}

console.log('\n');
console.log("computing checksum");
const flag = verify_Id(Hotel_resource_id);
if (flag) console.log("Checksums matched.");
else console.log("Checksums did not match.");
The verify_Id
function checks the integrity of the ID by checking the checksum. The remaining characters of the identifier are decoded into a buffer, and then calculate_checksum
and get_checksum_character
are run on this buffer to recompute the checksum character for the comparison (calculated_checksum_char == checksum_char).
This schematic chart demonstrates how the composite function works:
In this chart, slicing refers to separating the ID value (value
) from the checksum character (checksum
). In your earlier code block, the expression identifier.substring(0, identifier.length - 1) picks up the ID value, whereas identifier[identifier.length - 1] takes the last character from the resource ID.
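The slicing itself is ordinary string manipulation and can be tried in isolation; the short ID below is hypothetical, with 'X' standing in for the checksum character:

```javascript
const identifier = 'ABC123X'; // hypothetical ID; the final 'X' stands in for the checksum

const value = identifier.substring(0, identifier.length - 1);
const checksumChar = identifier[identifier.length - 1];

console.log(value);        // 'ABC123' -- the portion that gets base32-decoded
console.log(checksumChar); // 'X'      -- compared against the recomputed checksum
```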
Your index.js
file should now match the following code:
import crypto from 'crypto'; // for generating bytes from the number
import base32Encode from 'base32-encode'; // for encoding the bytes into a unique ID as a string
import base32Decode from 'base32-decode'; // for decoding the ID into bytes

function generate_Id(byte_size) {
  const bytes = crypto.randomBytes(byte_size);
  return base32Encode(bytes, 'Crockford');
}

//console.log('ID for byte size = 1:', generate_Id(1), '\n');
//console.log('ID for byte size = 12:', generate_Id(12), '\n');
//console.log('ID for byte size = 123:', generate_Id(123), '\n');

function calculate_checksum(bytes) {
  const intValue = BigInt(`0x${bytes.toString('hex')}`);
  return Number(intValue % BigInt(37));
}

function get_checksum_character(checksumValue) {
  const alphabet = '0123456789ABCDEFG' +
    'HJKMNPQRSTVWXYZ*~$=U'; // custom-built string of alphanumeric characters
  return alphabet[Math.abs(checksumValue)]; // picking out an alphanumeric character
}

function generate_Id_with_checksum(bytes_size) {
  const bytes = crypto.randomBytes(bytes_size);
  const checksum = calculate_checksum(bytes);
  const checksumChar = get_checksum_character(checksum);
  console.log("checksum character: ", checksumChar);
  const encoded = base32Encode(bytes, 'Crockford');
  return encoded + checksumChar;
}

const Hotel_resource_id = generate_Id_with_checksum(132);
console.log("Hotel resource id: ", Hotel_resource_id);

function verify_Id(identifier) {
  const value = identifier.substring(0, identifier.length - 1);
  const checksum_char = identifier[identifier.length - 1];
  //console.log(value, checksum_char);
  const buffer = Buffer.from(base32Decode(value, 'Crockford'));
  const calculated_checksum_char = get_checksum_character(calculate_checksum(buffer));
  console.log(calculated_checksum_char);
  const flag = calculated_checksum_char == checksum_char;
  return flag;
}

console.log('\n');
console.log("computing checksum");
const flag = verify_Id(Hotel_resource_id);
if (flag) console.log("Checksums matched.");
else console.log("Checksums did not match.");
Now you can run this code:
- node index.js
You will receive the following output:
Output
...
computing checksum
AW75SY7FVC7TKT7VP5ZF0M8C67CN36YZK27BXHVFHSDXJFKH54HK2AXQFMPN89Q5YQRPGNHGAYQ5JFKVD40EKTXCET97Q0FEPX6MX1ZTNWGCA08SBRSHP8B0037ACJG6F6472FEVARCAWM6P5MRJ2F6WTRPXHYS9N1JEDZVH41D33RA5365VNFC5G5VYEFPFJJD8151B28XXDBRHAF80 H
H
Checksums matched.
You now have a function called verify_Id
that checks the integrity of your identifier using a checksum character. Next, for instructional purposes, you can alter the resource ID so that the function will give a non-matching result to assess what happens when a check fails.
You will now alter the value for the identifier to check if the checksums will get matched. The alteration in this step will always result in a non-matching checksum, as the integrity is not maintained if any character in the ID is manipulated. An alteration like this may result from transmission errors or malicious behavior. This alteration is for instructional purposes and is not recommended for production builds but will enable you to assess a non-matching checksum result.
In your index.js
file, modify the Hotel_resource_id
by adding the highlighted lines:
...

const altered_Hotel_resource_id = Hotel_resource_id.replace('P', 'H');

console.log("computing checksum");
const flag = verify_Id(altered_Hotel_resource_id);
if (flag) console.log("Checksums matched.");
else console.log("Checksums did not match.");
In the above code, you replace the first occurrence of P with H in the ID and pass the new variable altered_Hotel_resource_id to verify_Id instead of Hotel_resource_id. Again, these changes are for informational purposes and can be reverted at the end of this step to ensure matching integrity.
Save the file, then rerun the code with the alterations for the resource ID:
- node index.js
You will receive an output that the checksums do not match:
Output
Checksums did not match.
In this step, you created a function to verify if the checksum passes the integrity test or not, and you encountered both cases. The non-matching checksum indicates that the resource ID has been manipulated. The notification enables a developer to take action against malicious behavior, such as blocking a user request or reporting the request related to the resource ID, depending upon the application requirements.
To revert your function to the matching checksums result, delete the additional code added at the beginning of this step so that the code matches the file at the end of Step 2.
When you need a custom unique ID with a checksum, you can use this tutorial to help you generate data models, version your APIs, and more.
In this tutorial, you developed resource IDs that align with the characteristics of a good identifier. You also created a unique resource ID with checksum in a Node.js environment using base32-encoding
. Finally, you verified the integrity of the ID by decoding it with base32-decoding
.
For cross-confirmation, you can compare your final files with those in the DigitalOcean community repository. You can also git-clone the repository if you are familiar with the git version control system, or you can learn about it by following the Introduction to GitHub and Open-Source Projects series.
Now that you understand the fundamentals of checksum, you can experiment with other encoding algorithms, such as MD5.
If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers a number of benefits.
This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, your setup will do the following:
At the end of this tutorial, you will have a working shark information application running on Docker containers:
To follow this tutorial, you will need:
- A development server or local machine with a user account that has sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.

The first step in building this setup will be cloning the project code and modifying its package.json
file, which includes the project’s dependencies. You will add nodemon
to the project’s devDependencies
, specifying that you will be using it during development. Running the application with nodemon
ensures that it will be automatically restarted whenever you make changes to your code.
First, clone the nodejs-mongo-mongoose
repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.
Clone the repository into a directory called node_project
:
- git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project
Navigate to the node_project
directory:
- cd node_project
Open the project’s package.json
file using nano
or your favorite editor:
- nano package.json
Beneath the project dependencies and above the closing curly brace, create a new devDependencies
object that includes nodemon
:
...
  "dependencies": {
    "ejs": "^2.6.1",
    "express": "^4.16.4",
    "mongoose": "^5.4.10"
  },
  "devDependencies": {
    "nodemon": "^1.18.10"
  }
}
Save and close the file when you are finished editing. If you’re using nano
, press CTRL+X
, then Y
, then ENTER
.
With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.
Modifying your application for a containerized workflow means making your code more modular. Containers offer portability between environments, and your code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, you will refactor your code to make greater use of Node’s process.env property. This returns an object with information about your user environment at runtime. You can use this object in your code to dynamically assign configuration information at runtime with environment variables.
Begin with app.js
, your main application entrypoint. Open the file:
- nano app.js
Inside, you will see a definition for a port
constant, as well as a listen
function that uses this constant to specify the port the application will listen on:
...
const port = 8080;
...
app.listen(port, function () {
  console.log('Example app listening on port 8080!');
});
Redefine the port
constant to allow for dynamic assignment at runtime using the process.env
object. Make the following changes to the constant definition and listen
function:
...
const port = process.env.PORT || 8080;
...
app.listen(port, function () {
  console.log(`Example app listening on ${port}!`);
});
Your new constant definition assigns port
dynamically using the value passed in at runtime or 8080
. Similarly, you’ve rewritten the listen
function to use a template literal, which will interpolate the port value when listening for connections. Because you will be mapping your ports elsewhere, these revisions will prevent you from having to continuously revise this file as your environment changes.
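The fallback works because process.env values are always strings and unset variables read as undefined, so the || operator supplies the default. A minimal sketch:

```javascript
// When PORT is set, its (string) value wins:
process.env.PORT = '9090';
console.log(process.env.PORT || 8080); // '9090'

// When PORT is unset, the expression is undefined || 8080, i.e. the default:
delete process.env.PORT;
console.log(process.env.PORT || 8080); // 8080
```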
When you are finished editing, save and close the file.
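The fallback pattern used for the port constant can be exercised on its own. The following is a minimal standalone sketch (not part of app.js; the helper name is illustrative), with a numeric coercion added since environment variables always arrive as strings:

```javascript
// Resolve the port from an environment object, falling back to 8080.
// Mirrors the fallback in `const port = process.env.PORT || 8080;`,
// with the string value coerced to a number.
function resolvePort(env) {
  return env.PORT ? Number(env.PORT) : 8080;
}

console.log(resolvePort({}));               // 8080: no PORT set, default applies
console.log(resolvePort({ PORT: "3000" })); // 3000: value passed in at runtime
```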
Next, you will modify your database connection information to remove any configuration credentials. Open the db.js
file, which contains this information:
- nano db.js
Currently, the file defines constants for the database connection settings, builds a connection URI from those constants, and connects to the database using the mongoose.connect method. For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.
Your first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:
...
const MONGO_USERNAME = 'sammy';
const MONGO_PASSWORD = 'your_password';
const MONGO_HOSTNAME = '127.0.0.1';
const MONGO_PORT = '27017';
const MONGO_DB = 'sharkinfo';
...
Instead of hardcoding this information, you can use the process.env
object to capture the runtime values for these constants. Modify the block to look like this:
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
...
Save and close the file when you are finished editing.
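Destructuring process.env as above is equivalent to assigning each constant individually. A quick standalone sketch of the pattern, using a plain object in place of process.env:

```javascript
// Simulated environment object standing in for process.env.
const env = {
  MONGO_USERNAME: "sammy",
  MONGO_PORT: "27017",
};

// Pull the values out by property name, exactly as db.js does
// with process.env.
const { MONGO_USERNAME, MONGO_PORT, MONGO_DB } = env;

console.log(MONGO_USERNAME); // "sammy"
console.log(MONGO_DB);       // undefined: unset variables are simply undefined
```

Note that a variable missing from the environment becomes undefined rather than raising an error, which is why passing the values in at runtime matters.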
At this point, you have modified db.js
to work with your application’s environment variables, but you still need a way to pass these variables to your application. Create an .env
file with values that you can pass to your application at runtime.
Open the file:
- nano .env
This file will include the information that you removed from db.js
: the username and password for your application’s database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:
MONGO_USERNAME=sammy
MONGO_PASSWORD=your_password
MONGO_PORT=27017
MONGO_DB=sharkinfo
Note that you have removed the host setting that originally appeared in db.js
. You will now define your host at the level of the Docker Compose file, along with other information about your services and containers.
Save and close this file when you are finished editing.
Because your .env
file contains sensitive information, you will want to ensure that it is included in your project’s .dockerignore
and .gitignore
files so that it does not copy to your version control or containers.
Open your .dockerignore
file:
- nano .dockerignore
Add the following line to the bottom of the file:
...
.gitignore
.env
Save and close the file when you are finished editing.
The .gitignore
file in this repository already includes .env
, but feel free to check that it is there:
- nano .gitignore
...
.env
...
At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add to your database connection code to optimize it for a containerized workflow.
Your next step will be to make your database connection method more robust by adding code that handles cases where your application fails to connect to your database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.
Open db.js
for editing:
- nano db.js
Notice the code that you added earlier, along with the url
constant for Mongo’s connection URI and the Mongoose connect
method:
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
mongoose.connect(url, {useNewUrlParser: true});
Currently, your connect
method accepts an option that tells Mongoose to use Mongo’s new URL parser. You can add options to this method to define parameters for reconnection attempts. Do this by creating an options
constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for the options
constant:
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
const options = {
useNewUrlParser: true,
reconnectTries: Number.MAX_VALUE,
reconnectInterval: 500,
connectTimeoutMS: 10000,
};
...
The reconnectTries
option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval
defines the period between connection attempts in milliseconds. connectTimeoutMS
defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.
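The retry behavior these options describe can be sketched generically. The following is a hypothetical helper (not Mongoose itself) that retries a connection function a fixed number of times at a fixed interval, mirroring the semantics of reconnectTries and reconnectInterval:

```javascript
// Generic retry helper: call connectFn up to `tries` times,
// waiting `intervalMs` between attempts, mirroring the
// reconnectTries / reconnectInterval options.
function connectWithRetry(connectFn, tries, intervalMs) {
  return new Promise((resolve, reject) => {
    function attempt(remaining) {
      connectFn()
        .then(resolve)
        .catch((err) => {
          if (remaining <= 1) return reject(err);
          setTimeout(() => attempt(remaining - 1), intervalMs);
        });
    }
    attempt(tries);
  });
}

// Demo: a fake connection that fails twice, then succeeds.
let calls = 0;
function flakyConnect() {
  calls += 1;
  return calls < 3
    ? Promise.reject(new Error("connection refused"))
    : Promise.resolve("connected");
}

connectWithRetry(flakyConnect, 5, 10).then((msg) =>
  console.log(`${msg} after ${calls} attempts`)
);
```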
You can now use the new options
constant in the Mongoose connect
method to fine-tune your Mongoose connection settings. You will also add a promise to handle potential connection errors.
Currently, the Mongoose connect
method looks like this:
...
mongoose.connect(url, {useNewUrlParser: true});
Delete the existing connect
method and replace it with the following code, which includes the options
constant and a promise:
...
mongoose.connect(url, options).then( function() {
console.log('MongoDB is connected');
})
.catch( function(err) {
console.log(err);
});
In the case of a successful connection, your function logs an appropriate message; otherwise it will catch
and log the error, allowing you to troubleshoot.
The finished file will look like this:
const mongoose = require('mongoose');
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
const options = {
useNewUrlParser: true,
reconnectTries: Number.MAX_VALUE,
reconnectInterval: 500,
connectTimeoutMS: 10000,
};
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
mongoose.connect(url, options).then( function() {
console.log('MongoDB is connected');
})
.catch( function(err) {
console.log(err);
});
Save and close the file when you have finished editing.
You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.
With your code refactored, you are ready to write the docker-compose.yml
file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml
file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.
Before defining your services, you will add a tool to your project called wait-for
to ensure that your application only attempts to connect to your database once the database startup tasks are complete. This wrapper script uses netcat
to poll whether a specific host and port are accepting TCP connections. Using it allows you to control your application’s attempts to connect to your database by testing whether the database is ready to accept connections.
Though Compose allows you to specify dependencies between services using the depends_on
option, this order is based on whether the container is running rather than its readiness. Using depends_on
won’t be optimal for your setup since you want your application to connect only when the database startup tasks, including adding a user and password to the admin
authentication database, are complete. For more information on using wait-for
and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
Open a file called wait-for.sh
:
- nano wait-for.sh
Enter the following code into the file to create the polling function:
#!/bin/sh
# original script: https://github.com/eficode/wait-for/blob/master/wait-for
cmdname=$(basename "$0")
TIMEOUT=15
QUIET=0
echoerr() {
if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
}
usage() {
exitcode="$1"
cat << USAGE >&2
Usage:
$cmdname host:port [-t timeout] [-- command args]
-q | --quiet Do not output any status messages
-t TIMEOUT | --timeout=timeout Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
exit "$exitcode"
}
wait_for() {
for i in `seq $TIMEOUT` ; do
nc -z "$HOST" "$PORT" > /dev/null 2>&1
result=$?
if [ $result -eq 0 ] ; then
if [ $# -gt 0 ] ; then
exec "$@"
fi
exit 0
fi
sleep 1
done
echo "Operation timed out" >&2
exit 1
}
while [ $# -gt 0 ]
do
case "$1" in
*:* )
HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
shift 1
;;
-q | --quiet)
QUIET=1
shift 1
;;
-t)
TIMEOUT="$2"
if [ "$TIMEOUT" = "" ]; then break; fi
shift 2
;;
--timeout=*)
TIMEOUT="${1#*=}"
shift 1
;;
--)
shift
break
;;
--help)
usage 0
;;
*)
echoerr "Unknown argument: $1"
usage 1
;;
esac
done
if [ "$HOST" = "" -o "$PORT" = "" ]; then
echoerr "Error: you need to provide a host and port to test."
usage 2
fi
wait_for "$@"
Save and close the file when you are finished adding the code.
Make the script executable:
- chmod +x wait-for.sh
Next, open the docker-compose.yml
file:
- nano docker-compose.yml
First, define the nodejs
application service by adding the following code to the file:
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
volumes:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
networks:
- app-network
command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
The nodejs service definition includes the following options:
- build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
- context: This defines the build context for the image build, in this case the current project directory.
- dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
- image, container_name: These apply names to the image and container.
- restart: This defines the restart policy. The default is no, but you have set the container to restart unless it is stopped.
- env_file: This tells Compose that you would like to add environment variables from a file called .env, located in your build context.
- environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that you are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages. Also note that you have specified the db database container as the host, as discussed in Step 2.
- ports: This maps port 80 on the host to port 8080 on the container.
- volumes: You are including two types of mounts here. The first is a bind mount that mounts your application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container. The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount you just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing your application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.
Keep the following points in mind when using this approach:
- Your bind will mount the contents of the node_modules directory on the container to the host, and this directory will be owned by root, since the named volume was created by Docker.
- If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that you're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
The final two options in the service definition are:
- networks: This specifies that your application service will join the app-network network, which you will define at the bottom of the file.
- command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that you set in your application Dockerfile. Here, you are running the application using the wait-for script, which will poll the db service on port 27017 to test whether the database service is ready. Once the readiness test succeeds, the script will execute the command you have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes you make to your code are reloaded without your having to restart the application.
Next, create the db service by adding the following code below the application service definition:
...
db:
image: mongo:4.1.8-xenial
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
Some of the settings defined for the nodejs service remain the same, but you've also made the following changes to the image, environment, and volumes definitions:
- image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. You are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
- MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. You have set these variables using the values from your .env file, which you pass to the db service using the env_file option. Doing this means that your sammy application user will be a root user on the database instance, with access to all the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.
Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
- dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.
The db service was also added to the app-network network with the networks option.
As a final step, add the volume and network definitions to the bottom of the file:
...
networks:
app-network:
driver: bridge
volumes:
dbdata:
node_modules:
The user-defined bridge network app-network
enables communication between your containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, your db
and nodejs
containers can communicate with each other, and you only need to expose port 80
for front-end access to the application.
Your top-level volumes
key defines the volumes dbdata
and node_modules
. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/
, that’s managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/
and get mounted to any container that uses the volume. In this way, the shark information data that your users will create will persist in the dbdata
volume even if you remove and recreate the db
container.
The finished docker-compose.yml
file will look like this:
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
volumes:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
networks:
- app-network
command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
db:
image: mongo:4.1.8-xenial
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
dbdata:
node_modules:
Save and close the file when you are finished editing.
With your service definitions in place, you are ready to start the application.
With your docker-compose.yml
file in place, you can create your services with the docker-compose up
command. You can also test that your data will persist by stopping and removing your containers with docker-compose down
.
First, build the container images and create the services by running docker-compose up
with the -d
flag, which will then run the nodejs
and db
containers in the background:
- docker-compose up -d
The output confirms that your services have been created:
Output...
Creating db ... done
Creating nodejs ... done
You can also get more detailed information about the startup processes by displaying the log output from the services:
- docker-compose logs
If everything has started correctly, the output will look like this:
Output...
nodejs | [nodemon] starting `node app.js`
nodejs | Example app listening on 8080!
nodejs | MongoDB is connected
...
db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin
You can also check the status of your containers with docker-compose ps
:
- docker-compose ps
The output indicates that your containers are running:
Output Name Command State Ports
----------------------------------------------------------------------
db docker-entrypoint.sh mongod Up 27017/tcp
nodejs ./wait-for.sh db:27017 -- ... Up 0.0.0.0:80->8080/tcp
With your services running, you can visit http://your_server_ip
in the browser:
Click on the Get Shark Info button to enter a page with an entry form where you can submit a shark name and a description of that shark’s general character:
In the form, add a shark of your choosing. For the purpose of this demonstration, add Megalodon Shark
to the Shark Name field, and Ancient
to the Shark Character field:
Click on the Submit button and a page with this shark information will be displayed back to you:
As a final step, test that the data you’ve just entered will persist if you remove your database container.
Back at your terminal, type the following command to stop and remove your containers and network:
- docker-compose down
Note that you are not including the --volumes
option; hence, your dbdata
volume is not removed.
The following output confirms that your containers and network have been removed:
OutputStopping nodejs ... done
Stopping db ... done
Removing nodejs ... done
Removing db ... done
Removing network node_project_app-network
Recreate the containers:
- docker-compose up -d
Now head back to the shark information form:
Enter a new shark of your choosing. This example will use Whale Shark
and Large
:
Once you click Submit, you will notice that the new shark has been added to the shark collection in your database without the loss of the data you've already entered:
Your application is now running on Docker containers with data persistence and code synchronization enabled.
By following this tutorial, you have created a development setup for your Node application using Docker containers. You’ve made your project more modular and portable by extracting sensitive information and decoupling your application’s state from your application code. You have also configured a boilerplate docker-compose.yml
file that you can revise as your development needs and requirements change.
As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.
To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose.
SQLite is a popular open-source SQL database engine for storing data. It is serverless, meaning it does not need a server to run; instead it reads and writes data to a file that resides on the computer disk. Furthermore, SQLite doesn't require any configuration, which makes it more portable and a popular choice for embedded systems, desktop/mobile apps, and prototyping, among other things.
To use SQLite with Node.js, you need a database client that connects to an SQLite database and sends SQL statements from your application to the database for execution. One of the popular choices is the node-sqlite3
package that provides asynchronous bindings for SQLite 3.
In this tutorial, you’ll use node-sqlite3
to create a connection with an SQLite database. Next, you’ll create a Node.js app that creates a table and inserts data into the database. Finally, you’ll modify the app to use node-sqlite3
to retrieve, update, and delete data from the database.
To follow this tutorial, you will need:
A Node.js development environment set up on your system. If you are using Ubuntu 22.04, install the latest version of Node.js by following Option 3 of our tutorial How To Install Node.js on Ubuntu 22.04. For other systems, consult our tutorial series How to Install Node.js and Create a Local Development Environment.
SQLite3 installed on your development environment. Follow Step 1 of our tutorial How To Install and Use SQLite on Ubuntu 20.04.
Basic knowledge of how to create tables and write SQL queries to retrieve and modify data in a table. Follow Steps 2 through 6 of our tutorial How To Install and Use SQLite on Ubuntu 20.04.
Familiarity with how to write a Node.js program, which you can find in our tutorial How To Write and Run Your First Program in Node.js.
In this step, you’ll create the project directory and download node-sqlite3
as a dependency.
To begin, create a directory using the mkdir
command. It is called sqlite_demo
for the sake of this tutorial, but you can replace the name with one of your choosing:
- mkdir sqlite_demo
Next, change into the newly created directory using the cd
command:
- cd sqlite_demo
Initialize the project directory as an npm package using the npm
command:
- npm init -y
The command creates a package.json
file, which holds important metadata for your project. The -y
option instructs npm
to accept all defaults.
After running the command, the following output will display on your screen:
OutputWrote to /home/sammy/sqlite_demo/package.json:
{
"name": "sqlite_demo",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
The output indicates that the package.json file has been created, which contains properties that record important metadata for your project. Some of the important options are:
- name: the name of your project.
- version: your project version.
- main: the starting point for your project.
You can leave the default options as they are, but feel free to modify the property values to suit your preference. For more information about the properties, consult npm's package.json documentation.
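Since package.json is plain JSON, these properties can also be read programmatically. A small standalone sketch parsing a manifest like the one npm init -y generates:

```javascript
// A manifest string like the one npm init -y writes to package.json.
const manifest = `{
  "name": "sqlite_demo",
  "version": "1.0.0",
  "main": "index.js",
  "license": "ISC"
}`;

// package.json is ordinary JSON, so JSON.parse recovers the metadata.
const pkg = JSON.parse(manifest);
console.log(pkg.name);    // "sqlite_demo"
console.log(pkg.version); // "1.0.0"
```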
Next, install the node-sqlite3
package with npm install
:
- npm install sqlite3
Upon installing the package, the output will display as follows:
Outputadded 104 packages, and audited 105 packages in 9s
5 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Now that you’ve installed node-sqlite3
, you’ll use it to connect to an SQLite database in the next section.
In this step, you will use node-sqlite3
to connect your Node.js program to an SQLite database that you will create, which contains different sharks and their attributes. To establish a database connection, the node-sqlite3
package provides a Database
class. When instantiated, the class creates an SQLite database file on your computer disk and connects to it. Once the connection is established, you’ll create a table for your application, which in later sections you will use to insert, retrieve, or update data.
Using nano
, or your favorite text editor, create and open the db.js
file:
- nano db.js
In your db.js
file, add the following code to establish a connection with the SQLite database:
const sqlite3 = require("sqlite3").verbose();
const filepath = "./fish.db";
function createDbConnection() {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
});
console.log("Connection with SQLite has been established");
return db;
}
In the first line, you import the node-sqlite3
module into your program file. In the second line, you set the variable filepath
with the path you want your SQLite database to reside in and the name of the database file, which in this case is fish.db
.
In the next line, you define the createDbConnection()
function that establishes a connection with the SQLite database. Within the function, you instantiate the sqlite3.Database()
class with the new
keyword. The class takes two arguments: filepath
and a callback.
The first argument, filepath
, accepts the name and path to the SQLite database, which is the ./fish.db
here. The second argument is a callback that runs once the database has been created and a connection to the database has been established. The callback takes an error
parameter that is set to an error
object if an error occurs when trying to establish a database connection. Within the callback, you use the if
statement to check if there is an error. If the condition is true, you use the console.error()
method to log the error message.
Now, when you create an instance using the sqlite3.Database()
class, it creates an SQLite database file in your project directory and returns a database object that is stored in the db
variable. The database object provides methods that you can use to pass SQL statements that create tables and insert, retrieve, or modify data.
Finally, you invoke console.log()
to log a success message and return the database object in the db
variable.
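The callback passed to sqlite3.Database() follows Node's standard error-first convention. A generic sketch of that convention, independent of sqlite3 (the openResource helper is hypothetical, for illustration):

```javascript
// Error-first callbacks receive the error (or null) as their first
// argument - the same shape the sqlite3.Database() callback uses.
function openResource(shouldFail, callback) {
  if (shouldFail) {
    // Signal failure by passing an Error as the first argument.
    callback(new Error("could not open resource"));
  } else {
    // Signal success with a null error.
    callback(null);
  }
}

openResource(false, (error) => {
  if (error) return console.error(error.message);
  console.log("opened"); // success path
});

openResource(true, (error) => {
  if (error) return console.error(error.message); // failure path
  console.log("opened");
});
```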
Next, add the highlighted code to create a function that creates a table:
...
function createDbConnection() {
...
}
function createTable(db) {
db.exec(`
CREATE TABLE sharks
(
ID INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(50) NOT NULL,
color VARCHAR(50) NOT NULL,
weight INTEGER NOT NULL
);
`);
}
The createTable()
function creates a table in the SQLite database. It takes a database object db
as a parameter. Within the createTable()
function, you invoke the exec()
method of the db
database object that sends the given SQL statement to the database to be executed. The exec()
method is only used for queries that do not return result rows.
The CREATE TABLE sharks... SQL statement passed to the exec() method creates a table sharks with the following fields:
- ID: stores values of INTEGER datatype. The PRIMARY KEY constraint designates the column as the primary key, and AUTOINCREMENT instructs SQLite to automatically increment the ID column values for each row in the table.
- name: details the name of the shark using the VARCHAR datatype that has a maximum of 50 characters. The NOT NULL constraint ensures that the field cannot store NULL values.
- color: represents the color of the shark using the VARCHAR datatype with a maximum of 50 characters. The NOT NULL constraint signifies that the field should not accept NULL values.
- weight: stores the weight of the shark in kilograms using the INTEGER datatype, and uses the NOT NULL constraint to ensure that NULL values are not allowed.
In the same db.js file, add the highlighted code to invoke the createTable() function:
function createDbConnection() {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
createTable(db);
});
console.log("Connection with SQLite has been established");
return db;
}
function createTable(db) {
...
}
When the callback runs, you call the createTable()
function with the db
database object as an argument.
Next, add the following line to call the createDbConnection()
function:
...
function createDbConnection() {
...
}
function createTable(db) {
...
}
module.exports = createDbConnection();
In the preceding code, you call the createDbConnection()
function, which establishes the connection to the database and returns a database object. You then use module.exports
to export the database object so that you can reference it in other files.
Your file will now contain the following:
const sqlite3 = require("sqlite3").verbose();
const filepath = "./fish.db";
function createDbConnection() {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
createTable(db);
});
console.log("Connection with SQLite has been established");
return db;
}
function createTable(db) {
db.exec(`
CREATE TABLE sharks
(
ID INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(50) NOT NULL,
color VARCHAR(50) NOT NULL,
weight INTEGER NOT NULL
);
`);
}
module.exports = createDbConnection();
Save and exit your file. If using nano
, press CTRL+X
to exit, press y
to save the changes you made, and press ENTER
to confirm the filename.
Run the db.js
file using the node
command:
- node db.js
The output will reveal that the database connection has been established successfully:
OutputConnection with SQLite has been established
Next, check if the fish.db
database file has been created using the ls
command:
- ls
Outputdb.js fish.db node_modules package-lock.json package.json
The appearance of the fish.db
database file in the output confirms that the database was created successfully.
Now, each time you run the db.js
file, it will call the createTable()
function to create a table in the database. Attempting to create a table that already exists triggers SQLite to throw an error. To see this, rerun the db.js
file with the node
command:
- node db.js
This time, you will get an error as revealed in the following output:
OutputConnection with SQLite has been established
undefined:0
[Error: SQLITE_ERROR: table sharks already exists
Emitted 'error' event on Database instance at:
] {
errno: 1,
code: 'SQLITE_ERROR'
}
Node.js v17.6.0
The error message indicates that the sharks table already exists. This is because when you ran the node command for the first time, the fish.db database file was created, along with the sharks table. When you rerun the command, the createTable() function runs a second time, which triggers an error since the table already exists.
This error would also be triggered anytime you want to use the database object methods to manipulate the database in other files. For example, in the next step, you’ll create a file that inserts data into the database. To use the database object, you will import the db.js
file and call the relevant method for inserting data into the database. When you run the file, it will in turn run the db.js
, which will trigger the same error.
To remedy this, you will use the existsSync()
method of the fs
module to check for the existence of the database file fish.db
in the project directory. If the database file exists, you will establish the connection to the database without calling the createTable()
function. If it does not exist, you will establish the connection and call the createTable()
function.
To do this, open the db.js
in your editor once more:
- nano db.js
In your db.js
file, add the highlighted code to check for the existence of the database file:
const fs = require("fs");
const sqlite3 = require("sqlite3").verbose();
const filepath = "./fish.db";
function createDbConnection() {
if (fs.existsSync(filepath)) {
return new sqlite3.Database(filepath);
} else {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
createTable(db);
});
console.log("Connection with SQLite has been established");
return db;
}
}
First, you import the fs
module used for interacting with the file system. Second, in the added if
statement, you invoke the fs.existsSync()
method to check for the existence of the file in the given argument, which is the database file ./fish.db
here. If the file exists, you call sqlite3.Database()
with the database file path and omit the callback. However, if the file does not exist, you create the database instance and invoke the createTable()
function in the callback to create the table in the database.
At this point, the complete file will now display as follows:
const fs = require("fs");
const sqlite3 = require("sqlite3").verbose();
const filepath = "./fish.db";
function createDbConnection() {
if (fs.existsSync(filepath)) {
return new sqlite3.Database(filepath);
} else {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
createTable(db);
});
console.log("Connection with SQLite has been established");
return db;
}
}
function createTable(db) {
db.exec(`
CREATE TABLE sharks
(
ID INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR(50) NOT NULL,
color VARCHAR(50) NOT NULL,
weight INTEGER NOT NULL
);
`);
}
module.exports = createDbConnection();
Save and close your file once you are done making the changes.
To make sure that the db.js
file doesn’t throw an error when run multiple times, delete the fish.db
file with the rm
command to start afresh:
- rm fish.db
Run the db.js
file:
- node db.js
OutputConnection with SQLite has been established
Now, confirm that db.js
connects to the database and doesn’t attempt to create the table again for all the subsequent reruns of the db.js
file by running the file again:
- node db.js
This time, the error no longer appears.
Now that you established the connection to the SQLite database and created a table, you’ll insert data into the database.
In this step, you will create a function that inserts data in the SQLite database using the node-sqlite3
module. You’ll pass the data you want to insert as command-line arguments to the program.
Create and open the insertData.js
file in your text editor:
- nano insertData.js
In your insertData.js
file, add the following code to get command-line arguments:
const db = require("./db");
function insertRow() {
const [name, color, weight] = process.argv.slice(2);
}
In the first line, you import the database object that you exported in the db.js
file in the previous step. In the second line, you define the function insertRow()
, which you will soon use to insert data into the table. In the function, process.argv
returns all the command-line arguments in an array. The first element on index 0
contains the path to Node. The second element on index 1
stores the JavaScript program’s filename. All the subsequent elements starting from index 2
contain the command-line arguments you passed to the file. To skip the first two arguments, you use JavaScript’s slice()
method to make a shallow copy of the array and return elements from index 2
to the end of the array.
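To make the indexing concrete, here is a small sketch of what process.argv holds for the command `node insertData.js sammy blue 1900` (the paths shown are illustrative, not real output from your machine):

```javascript
// Sketch of the argv array for: node insertData.js sammy blue 1900
const argv = [
  "/usr/local/bin/node",        // index 0: path to the Node binary
  "/home/sammy/insertData.js",  // index 1: path to the script file
  "sammy",                      // index 2 onward: your own arguments
  "blue",
  "1900",
];

// slice(2) makes a shallow copy starting at index 2, skipping both paths.
const [name, color, weight] = argv.slice(2);
console.log(name, color, weight); // sammy blue 1900
```

Note that every argument arrives as a string, so a numeric value such as the weight must be converted with Number() if you need arithmetic on it.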
Next, add the highlighted code to insert data into the database:
const db = require("./db");
function insertRow() {
const [name, color, weight] = process.argv.slice(2);
db.run(
`INSERT INTO sharks (name, color, weight) VALUES (?, ?, ?)`,
[name, color, weight],
function (error) {
if (error) {
console.error(error.message);
}
console.log(`Inserted a row with the ID: ${this.lastID}`);
}
);
}
In the preceding code, you call the db.run()
method, which takes three arguments: the SQL statement, an array, and a callback. The first argument, INSERT INTO sharks...
, is an SQL statement that inserts data into the database. The VALUES
statement in the INSERT
statement takes a comma-separated list of values that need to be inserted. Notice that you are passing the ?
placeholders instead of passing the values directly. This is to avoid SQL injection attacks. During execution, SQLite will automatically substitute the placeholders with the values passed in the db.run()
method second argument, which is an array containing command-line argument values.
Finally, the third argument for the db.run()
method is a callback that runs when the data has been successfully inserted into the table. If there is an error, the error message is logged in the console. If the insertion was successful, you log a success message with the ID of the newly inserted row that this.lastID
returned.
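To see why the placeholders matter, the following illustration uses plain string handling (not the sqlite3 API itself) to contrast concatenation with parameter binding:

```javascript
// A hostile value a user might submit as a shark "name".
const userInput = "sammy'); DROP TABLE sharks; --";

// Unsafe: the input is spliced into the SQL text itself,
// so the statement now contains attacker-controlled SQL.
const unsafeSql = `INSERT INTO sharks (name) VALUES ('${userInput}')`;

// Safe: the statement is fixed, and the value travels separately,
// exactly as in db.run(sql, [values], callback). The driver treats
// the bound value as data only, never as SQL.
const safeSql = "INSERT INTO sharks (name) VALUES (?)";
const params = [userInput];

console.log(unsafeSql.includes("DROP TABLE")); // true — SQL was injected
console.log(safeSql.includes("DROP TABLE"));   // false — still a plain INSERT
```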
Now, add the highlighted line to call the insertRow()
function:
const db = require("./db");
function insertRow() {
const [name, color, weight] = process.argv.slice(2);
db.run(
`INSERT INTO sharks (name, color, weight) VALUES (?, ?, ?)`,
[name, color, weight],
function (error) {
if (error) {
console.error(error.message);
}
console.log(`Inserted a row with the ID: ${this.lastID}`);
}
);
}
insertRow();
Save and close your file, then run the file with the shark name, color, and weight arguments:
- node insertData.js sammy blue 1900
The output indicates the row has been inserted into the table with the primary ID 1
:
OutputInserted a row with the ID: 1
Run the command again with different arguments:
- node insertData.js max white 2100
OutputInserted a row with the ID: 2
When you run the preceding commands, two rows will be created in the sharks
table.
Now that you can insert data into the SQLite database, next you’ll retrieve the data from the database.
In this step, you’ll use the node-sqlite3
module to retrieve all the data stored in the sharks
table in the SQLite database and log them into the console.
First, open the listData.js
file:
- nano listData.js
In your listData.js
file, add the following code to retrieve all rows:
const db = require("./db");
function selectRows() {
db.each(`SELECT * FROM sharks`, (error, row) => {
if (error) {
throw new Error(error.message);
}
console.log(row);
});
}
First, you import the database object in the db.js
file. Second, you define the selectRows()
function that retrieves all the rows in the SQLite database. Within the function, you use the each()
method of the database object db
to retrieve rows from the database one by one. The each()
method takes two arguments: an SQL statement and a callback.
The first argument SELECT
returns all rows in the sharks
table. The second argument is a callback that runs each time a row is retrieved from the database. In the callback, you check for an error. If there is an error, you use the throw
statement to create a custom error. If no error occurred during retrieval, the data is logged in the console.
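The row-by-row contract of each() can be sketched with a plain stand-in function (this is an illustration of the callback shape, not the sqlite3 implementation): the callback fires once per row with an (error, row) pair, rather than once with the whole result set.

```javascript
// Stand-in for the each()-style contract: one callback invocation per row.
function eachRow(rows, callback) {
  for (const row of rows) {
    callback(null, row); // (error, row) — error is null on success
  }
}

const rows = [
  { ID: 1, name: "sammy", color: "blue", weight: 1900 },
  { ID: 2, name: "max", color: "white", weight: 2100 },
];

const seen = [];
eachRow(rows, (error, row) => {
  if (error) throw new Error(error.message);
  seen.push(row.name); // handle each row as it arrives
});
console.log(seen); // [ 'sammy', 'max' ]
```

This per-row style keeps memory use low for large tables; sqlite3 also offers db.all(), which instead delivers every row at once in a single array.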
Now, add the highlighted code to call the selectRows()
function:
const db = require("./db");
function selectRows() {
db.each(`SELECT * FROM sharks`, (error, row) => {
if (error) {
throw new Error(error.message);
}
console.log(row);
});
}
selectRows();
Save and exit your file, then run the file:
- node listData.js
Upon running the command, you’ll notice output that displays as follows:
Output{ ID: 1, name: 'sammy', color: 'blue', weight: 1900 }
{ ID: 2, name: 'max', color: 'white', weight: 2100 }
The output displays all the rows you inserted in the sharks
table in the previous step. The node-sqlite3
module converts each SQL result into a JavaScript object.
Now that you can retrieve data from the SQLite database, you’ll update the data in the SQLite database.
In this step, you’ll use the node-sqlite3
module to update a row in the SQLite database. To do that, you’ll pass the program a command-line argument containing the primary ID of the row you want to modify, as well as the value you want to update the row to.
Create and open the updateData.js
file in your text editor:
- nano updateData.js
In your updateData.js
file, add the following code to update a record:
const db = require("./db");
function updateRow() {
const [id, name] = process.argv.slice(2);
db.run(
`UPDATE sharks SET name = ? WHERE id = ?`,
[name, id],
function (error) {
if (error) {
console.error(error.message);
}
console.log(`Row ${id} has been updated`);
}
);
}
updateRow();
First, you import the database object from the db.js
file. Second, you define the updateRow
function that updates a row in the database. Within the function, you unpack the command-line arguments into the id
and name
variables. The id
variable contains the primary ID of the row you want to update, and name
contains the value you want the name field to reflect.
Next, you invoke the db.run()
function with the following arguments: an SQL statement and a callback. The UPDATE
SQL statement changes the name
column from the current value to the value passed in the name
variable. The WHERE
clause ensures that only the row with the ID in the id
variable is updated. The db.run()
method takes a second argument, which is a callback that runs once the value has been updated.
Finally, you call the updateRow()
function.
Save and close your file when you are finished making the changes.
Run the updateData.js
file with the id
of the row you want to change and the new name
:
- node updateData.js 2 sonny
OutputRow 2 has been updated
Verify that the name has been changed:
- node listData.js
When you run the command, your output will resemble the following:
Output{ ID: 1, name: 'sammy', color: 'blue', weight: 1900 }
{ ID: 2, name: 'sonny', color: 'white', weight: 2100 }
The output indicates that the row with the ID 2
now has sonny
as the value for its name
field.
With that, you can now update a row in the database. Next, you’ll delete data from the SQLite database.
In this section, you’ll use node-sqlite3
to select and delete a row from a table in the SQLite database.
Create and open the deleteData.js
file in your text editor:
- nano deleteData.js
In your deleteData.js
file, add the following code to delete a row in the database:
const db = require("./db");
async function deleteRow() {
const [id] = process.argv.slice(2);
db.run(`DELETE FROM sharks WHERE id = ?`, [id], function (error) {
if (error) {
return console.error(error.message);
}
console.log(`Row with the ID ${id} has been deleted`);
});
}
deleteRow();
First, you import the database object in the db.js
file. Second, you define the deleteRow()
that deletes a row in the table sharks
. Within the function, you unpack the primary key ID and store it in the id
variable. Next, you invoke db.run()
, which takes two arguments. The first argument is an SQL statement DELETE from sharks...
that deletes a row in the table sharks
. The WHERE
clause ensures that only the row with the ID in the id
variable is deleted. The second argument is a callback that runs once the row has been deleted. If successful, the function logs a success message; otherwise, it logs the error in the console.
Finally, you call the deleteRow()
function.
Save and close your file, then run the following command:
- node deleteData.js 2
OutputRow with the ID 2 has been deleted
Next, confirm that the row has been deleted:
- node listData.js
When you run the command, your output will display similar to the following:
Output{ ID: 1, name: 'sammy', color: 'blue', weight: 1900 }
The row with the ID 2
is no longer in the results. This confirms that the row has been deleted.
With that, you can now delete rows in the SQLite database using the node-sqlite3
module.
In this article, you created a Node.js app that uses the node-sqlite3
module to connect to and create a table on the SQLite database. Next, you modified the app to insert, retrieve, and update data in the database. Finally, you modified the app to delete data in the database.
For more insight into node-sqlite3
methods, visit the node-sqlite3
Wiki page. To learn about SQLite, consult the SQLite documentation. If you want to know how SQLite compares with other SQL databases, consult our tutorial SQLite vs MySQL vs PostgreSQL: A Comparison Of Relational Database Management Systems. To continue your Node.js journey, visit the How To Code in Node.js series.
asdf is a command line interface (CLI) tool for managing different runtime versions across multiple programming languages. It unifies all the runtimes under one configuration file and uses a plugin structure to manage everything with one tool. For example, instead of installing Node.js through a dedicated version manager, you install it through an asdf plugin, with asdf acting as a central repository of plugins that are maintained either officially or by community contributors.
In this tutorial, you will install the asdf core and the Node.js plugin with build dependencies, which is the minimum required for functionality. You will then install Node.js and manage the version you want to use, depending on your desired scope.
To follow this tutorial, you will need an Ubuntu 22.04 server with a non-root user that has sudo privileges and a firewall enabled.
asdf relies on the installation of a core that, alone, does not have functionality. The asdf core relies on separate plugins that are specific to a given programming language or program. Most commonly, it is used to install and manage multiple versions of a given programming language. It’s recommended that you download the asdf core with git
, which comes installed with Ubuntu 22.04. To get the latest version of asdf, clone the latest branch from the asdf repository:
- git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.10.2
asdf requires a unique installation depending on the combination of shell type and method it was downloaded. By default, Ubuntu uses Bash for its shell, which uses the ~/.bashrc
file for configuration and customization. To enable the usage of the asdf
command, you will have to add the following line:
- echo ". $HOME/.asdf/asdf.sh" >> ~/.bashrc
Next, make sure your changes are applied to your current session:
- source ~/.bashrc
Note: If you are using ZSH instead of Bash, you can add the same line but to the ~/.zshrc
file instead.
With the core installed, you can now install the plugin.
Installing the asdf plugin for Node.js is not the same as installing Node.js itself; that happens in the next step. As mentioned previously, the minimum requirements for a usable asdf setup are the asdf core and at least one plugin. Once you install this plugin, you can use it to install the runtime it handles.
Every asdf plugin is maintained separately. Some are maintained by the core asdf team, but most are community maintained. Every asdf plugin has its own repository and dependencies that need to be installed. You must check each plugin repository, such as with the Node.js plugin repository. This plugin in particular is officially maintained by the asdf team.
To install the plugin, use the following asdf plugin add
command:
- asdf plugin add nodejs https://github.com/asdf-vm/asdf-nodejs.git
For this Node.js plugin, the dependencies are mentioned in the “Use” section of its “README” file. Within that section, the explicit dependencies are linked to in the official Node.js repository’s section on building Node.js. This must be done manually because asdf targets multiple operating systems, each with its own unique dependencies and installation methods, and the dependencies can also vary from plugin to plugin. For this plugin on Ubuntu, you need to install these dependencies. Begin by updating your apt
source index:
- sudo apt update
Then, you can install the required dependencies:
- sudo apt install python3 g++ make python3-pip
Depending on the version of Node.js you need, this plugin either downloads pre-compiled binaries or compiles binaries from source. If you choose a version that requires compiling from source, the dependencies you installed above are required.
With the plugin successfully installed, next you can install Node.js.
You can install multiple Node.js versions, from the latest release to any older specific version. To install the latest version of Node.js, enter the following:
- asdf install nodejs latest
OutputTrying to update node-build... ok
Downloading node-v18.10.0-linux-x64.tar.gz...
-> https://nodejs.org/dist/v18.10.0/node-v18.10.0-linux-x64.tar.gz
Installing node-v18.10.0-linux-x64...
Installed node-v18.10.0-linux-x64 to /home/sammy/.asdf/installs/nodejs/18.10.0
Installing the latest
version is a shortcut provided by asdf; it is not a special version. asdf identifies and enforces versions by their exact numbers. To install a specific version of Node.js, enter the following:
- asdf install nodejs 16.16.0
OutputTrying to update node-build... ok
Downloading node-v16.16.0-linux-x64.tar.gz...
-> https://nodejs.org/dist/v16.16.0/node-v16.16.0-linux-x64.tar.gz
Installing node-v16.16.0-linux-x64...
Installed node-v16.16.0-linux-x64 to /home/sammy/.asdf/installs/nodejs/16.16.0
With these two versions installed, you can check all the versions you have with the following:
- asdf list nodejs
Output 16.16.0
18.10.0
Additionally, if you ever want to remove a version, you can use the uninstall
command with a specific version target:
- asdf uninstall nodejs 16.16.0
Now that Node.js is installed, you can choose the version you want active.
asdf can set the version of Node.js at three different levels: local
, global
, and shell
. If you only want to set the Node.js version for your project’s working directory, run the following:
- asdf local nodejs latest
Setting the current version at the global
level acts at the user level for your system:
- asdf global nodejs latest
If you only want to set the version for the current shell session, enter the following:
- asdf shell nodejs latest
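As a sketch of what these three scopes do behind the scenes (based on asdf's documented behavior, not output from your machine): `asdf local` writes a `.tool-versions` file in the current directory, `asdf global` writes the same format to `~/.tool-versions`, and `asdf shell` only sets an environment variable for the current session.

```
# Hypothetical .tool-versions file created by `asdf local nodejs 18.10.0`
# in the project directory. `asdf global` writes this format to
# ~/.tool-versions instead, while `asdf shell` sets the
# ASDF_NODEJS_VERSION environment variable for the session only.
nodejs 18.10.0
```

Because the file lives alongside your project, you can commit it so collaborators get the same Node.js version automatically.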
Now you have a complete installation of Node.js using asdf, with the ability to switch to the version you need at the scope that you want.
In this tutorial, you installed the asdf core, the asdf Node.js plugin, and then Node.js itself. asdf enables multiple versions of a runtime to be installed, letting you choose the active version at different levels of scope, from global down to the working project directory. If you’re interested in a conventional installation of Node.js, check out this tutorial on how to install Node.js on Ubuntu 22.04.
Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.
In this guide, you will review three different ways of getting Node.js installed on a Rocky Linux 9 server:
- Using dnf to install the nodejs package from Rocky’s default software repository
- Using dnf with the NodeSource software repository to install specific versions of the nodejs package
- Installing nvm, the Node Version Manager, and using it to install and manage multiple versions of Node.js
For many users, using dnf with the default package sources will be sufficient. If you need specific newer (or legacy) versions of Node, you should use the NodeSource repository. If you are actively developing Node applications and need to switch between node versions frequently, choose the nvm method.
This guide assumes that you are using Rocky Linux 9. Before you begin, you should have a non-root user account with sudo
privileges set up on your system. You can learn how to do this by following the Rocky Linux 9 initial server setup tutorial.
Rocky Linux 9 contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 16.14.0. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.
To get this version, you can use the dnf
package manager:
- sudo dnf install nodejs -y
Check that the install was successful by querying node
for its version number:
- node -v
Outputv16.14.0
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. The Node.js package from Rocky’s default repositories also comes with npm
, the Node.js package manager. This will allow you to install modules and packages to use with Node.js.
At this point you have successfully installed Node.js and npm
using dnf
and the default Rocky software repositories. The next section will show you how to use an alternate repository to install different versions of Node.js.
To install a different version of Node.js, you can use the NodeSource repository. NodeSource is a third party repository that has more versions of Node.js available than the official Rocky repositories. Node.js v14, v16, and v18 are available as of the time of writing.
First, you’ll need to configure the repository locally, in order to get access to its packages. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 18.x
with your preferred version string (if different).
- cd ~
- curl -sL https://rpm.nodesource.com/setup_18.x -o nodesource_setup.sh
Refer to the NodeSource documentation for more information on the available versions.
You can inspect the contents of the downloaded script with vi
(or your preferred text editor):
- vi nodesource_setup.sh
Running third party shell scripts is not always considered a best practice, but in this case, NodeSource implements their own logic in order to ensure the correct commands are being passed to your package manager based on distro and version requirements. If you are satisfied that the script is safe to run, exit your editor, then run the script with sudo
:
- sudo bash nodesource_setup.sh
Output…
## Your system appears to already have Node.js installed from an alternative source.
Run `sudo yum remove -y nodejs npm` to remove these first.
## Run `sudo yum install -y nodejs` to install Node.js 18.x and npm.
## You may run dnf if yum is not available:
sudo dnf install -y nodejs
## You may also need development tools to build native addons:
sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install yarn
The repository will be added to your configuration and your local package cache will be updated automatically. You can now install the Node.js package in the same way you did in the previous section. It may be a good idea to fully remove your older Node.js packages before installing the new version, by using sudo dnf remove nodejs npm
. This will not affect your configurations at all, only the installed versions. Third party repositories don’t always package their software in a way that works as a direct upgrade over stock packages, and if you have trouble, you can always try to revert to a clean slate.
- sudo dnf remove nodejs npm -y
- sudo dnf install nodejs -y
Verify that you’ve installed the new version by running node
with the -v
version flag:
- node -v
Outputv18.9.0
The NodeSource nodejs
package contains both the node
binary and npm
, so you don’t need to install npm
separately.
At this point you have successfully installed Node.js and npm
using dnf
and the NodeSource repository. The next section will show how to use the Node Version Manager to install and manage multiple versions of Node.js.
Another way of installing Node.js that is particularly flexible is to use nvm, the Node Version Manager. This piece of software allows you to install and maintain many different independent versions of Node.js, and their associated Node packages, at the same time.
To install NVM on your Rocky Linux 9 machine, visit the project’s GitHub page. Copy the curl
command from the README file that displays on the main page. This will get you the most recent version of the installation script.
Before piping the command through to bash
, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash
segment at the end of the curl
command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh
Take a look and make sure you are comfortable with the changes it is making. When you are satisfied, run the command again with | bash
appended at the end. The URL you use will change depending on the latest version of nvm, but as of right now, the script can be downloaded and executed by typing:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
This will install the nvm
script to your user account. To use it, you must first source your .bashrc
file:
- source ~/.bashrc
Now, you can ask NVM which versions of Node are available:
- nvm list-remote
Output. . .
v16.13.1 (LTS: Gallium)
v16.13.2 (LTS: Gallium)
v16.14.0 (LTS: Gallium)
v16.14.1 (LTS: Gallium)
v16.14.2 (LTS: Gallium)
v16.15.0 (LTS: Gallium)
v16.15.1 (LTS: Gallium)
v16.16.0 (LTS: Gallium)
v16.17.0 (Latest LTS: Gallium)
v17.0.0
v17.0.1
v17.1.0
v17.2.0
…
It’s a very long list! You can install a version of Node by typing any of the release versions you see. For instance, to get version v16.16.0 (an LTS release), you can type:
- nvm install v16.16.0
You can see the different versions you have installed by typing:
- nvm list
Output-> v16.16.0
system
default -> v16.16.0
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v16.16.0) (default)
stable -> 16.16 (-> v16.16.0) (default)
lts/* -> lts/gallium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.12 (-> N/A)
lts/fermium -> v14.20.0 (-> N/A)
lts/gallium -> v16.17.0 (-> N/A)
This shows the currently active version on the first line (-> v16.16.0
), followed by some named aliases and the versions that those aliases point to.
Note: If you also have a version of Node.js installed through dnf
, you may see a system
entry here. You can always activate the system-installed version of Node using nvm use system
.
You can install a release based on these aliases as well. For instance, to install the current gallium LTS release, run the following:
- nvm install lts/gallium
OutputDownloading and installing node v16.17.0...
Downloading https://nodejs.org/dist/v16.17.0/node-v16.17.0-linux-x64.tar.xz...
################################################################################# 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v16.17.0 (npm v8.15.0)
You can verify that the install was successful using the same technique from the other sections, by typing:
- node -v
Outputv16.17.0
The expected version of Node.js is now installed on your machine. A compatible version of npm
is also available.
There are quite a few ways to get up and running with Node.js on your Rocky Linux server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Rocky’s repositories is the easiest method, using nvm
or the NodeSource repository offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
URL, an abbreviation for Uniform Resource Locator, is an address given to a unique resource on the web. Because a URL is unique, no two resources can have the same URL.
The length and complexity of URLs vary. A URL might be as short as example.com
or as lengthy as http://llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch.co.uk
. Complex URLs can be unsightly, cause search engine optimization (SEO) issues, and negatively impact marketing plans. URL shorteners map a long URL to a shorter URL and redirect the user to the original URL when the short URL is used.
In this tutorial, you will create a URL shortener using NestJS. First, you will implement the URL shortening and redirecting logic in a service. Then, you will create route handlers to facilitate the shortening and redirection requests.
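The core idea can be sketched in a few lines (this uses an in-memory Map and a made-up short domain purely for illustration, not the NestJS service, TypeORM entity, or SQLite storage this tutorial builds):

```javascript
// In-memory stand-in for the database table of shortened URLs.
const store = new Map();

function shorten(longUrl, code) {
  store.set(code, longUrl);        // persist the code → URL mapping
  return `https://sho.rt/${code}`; // hypothetical short domain
}

function resolve(code) {
  return store.get(code); // the original URL, used for the redirect
}

const short = shorten("http://example.com/a/very/long/path?x=1", "abc123");
console.log(short);             // https://sho.rt/abc123
console.log(resolve("abc123")); // http://example.com/a/very/long/path?x=1
```

The real application replaces the Map with a database and generates the code with Nano-ID instead of hard-coding it, but the shorten/resolve pair is the whole pattern.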
To follow this tutorial, you will need:
In this step, you will set up everything you need to start implementing your URL shortening logic. You will install NestJS globally, generate a new NestJS application boilerplate, install dependencies, and create your project’s module, service, and controller.
First, you will install the Nest CLI globally if you have not previously installed it. You will use this CLI to generate your project directory and the required files. Run the following command to install the Nest CLI:
- npm install -g @nestjs/cli
The -g
flag will install the Nest CLI globally on your system.
You will see the following output:
- Output...
- added 249 packages, and audited 250 packages in 3m
- 39 packages are looking for funding
- run npm fund for details
- found 0 vulnerabilities
Then you will use the new
command to create the project and generate the necessary boilerplate starter files:
- nest new URL-shortener
You will see the following output:
- Output...
- ⚡ We will scaffold your app in a few seconds..
-
- CREATE url-shortener/.eslintrc.js (631 bytes)
- CREATE url-shortener/.prettierrc (51 bytes)
- CREATE url-shortener/nest-cli.json (118 bytes)
- CREATE url-shortener/package.json (2002 bytes)
- CREATE url-shortener/README.md (3339 bytes)
- CREATE url-shortener/tsconfig.build.json (97 bytes)
- CREATE url-shortener/tsconfig.json (546 bytes)
- CREATE url-shortener/src/app.controller.spec.ts (617 bytes)
- CREATE url-shortener/src/app.controller.ts (274 bytes)
- CREATE url-shortener/src/app.module.ts (249 bytes)
- CREATE url-shortener/src/app.service.ts (142 bytes)
- CREATE url-shortener/src/main.ts (208 bytes)
- CREATE url-shortener/test/app.e2e-spec.ts (630 bytes)
- CREATE url-shortener/test/jest-e2e.json (183 bytes)
-
- ? Which package manager would you ❤️ to use? (Use arrow keys)
- > npm
- yarn
- pnpm
Choose npm
.
You’ll see the following output:
- Output√ Installation in progress... ☕
-
- 🚀 Successfully created project url-shortener
- 👉 Get started with the following commands:
-
- $ cd url-shortener
- $ npm run start
-
- Thanks for installing Nest 🙏
- Please consider donating to our open collective
- to help us maintain this package.
-
- 🍷 Donate: https://opencollective.com/nest
Move to your created project directory:
- cd url-shortener
You will run all subsequent commands in this directory.
Note: The NestJS CLI creates app.controller.ts
, app.controller.spec.ts
, and app.service.ts
files when you generate a new project. Because you won’t need them in this tutorial, you can either delete or ignore them.
Next, you will install the required dependencies.
This tutorial requires a few dependencies, which you will install using NodeJS’s default package manager npm. The required dependencies include TypeORM, SQLite, Class-validator, Class-transformer, and Nano-ID.
TypeORM is an object-relational mapper that facilitates interactions between a TypeScript application and a relational database. This ORM works seamlessly with NestJS due to NestJS’s dedicated @nestjs/typeorm
package. You will use TypeORM together with this dedicated package to interact with an SQLite database.
Run the following command to install TypeORM and its dedicated NestJS package:
- npm install @nestjs/typeorm typeorm
SQLite is a library that implements a small, fast, self-contained SQL database engine. You will use this dependency as your database to store and retrieve shortened URLs.
Run the following command to install SQLite:
- npm install sqlite3
The class-validator
package contains decorators used for data validation in NestJS. You will use this dependency with your data-transfer object to validate the data sent into your application.
Run the following command to install class-validator
:
- npm install class-validator
The class-transformer
package allows you to transform plain objects into an instance of a class and vice-versa. You will use this dependency with the class-validator
as it cannot work alone.
Run the following command to install class-transformer
:
- npm install class-transformer
Nano-ID is a secure, URL-friendly unique string ID generator. You will use this dependency to generate a unique id for each URL resource.
Run the following command to install Nano-ID:
- npm install nanoid@^3.0.0
Note: Versions of Nano-ID from 4.0.0
onward dropped support for CommonJS modules. This could cause an error in your application because the JavaScript code produced by the TypeScript compiler still uses the CommonJS module system.
After you have installed the required dependencies, you will generate the project’s module, service, and controller using the Nest CLI. The module will organize your project, the service will handle all the logic for the URL shortener, and the controller will handle the routes.
Run the following command to generate your module:
- nest generate module url
You will see the following output:
OutputCREATE src/url/url.module.ts (80 bytes)
UPDATE src/app.module.ts (304 bytes)
Next, run the following command to generate your service:
- nest generate service url --no-spec
The --no-spec
flag tells the Nest CLI to generate the files without their test files. You won’t need the test files in this tutorial.
You will see the following output:
OutputCREATE src/url/url.service.ts (87 bytes)
UPDATE src/url/url.module.ts (151 bytes)
Then run the following command to generate your controller:
- nest generate controller url --no-spec
You will see the following output:
OutputCREATE src/url/url.controller.ts (95 bytes)
UPDATE src/url/url.module.ts (233 bytes)
In this step, you generated your application and most of the files needed for your development. Next, you’ll connect your application to a database.
In this step, you’ll create an entity to model the URL resource in your database. An entity is a file containing the necessary properties of the stored data. You will also create a repository as an access layer between your application and its database.
Using nano
or your preferred text editor, create and open a file in the src/url
folder called url.entity.ts
:
- nano src/url/url.entity.ts
This file will contain the entity to model your data.
Next, in your src/url/url.entity.ts
file, add the following Typescript code:
import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm';
@Entity()
export class Url {
@PrimaryGeneratedColumn()
id: number;
@Column()
urlCode: string;
@Column()
longUrl: string;
@Column()
shortUrl: string;
}
First, you import the Entity
, Column
, and PrimaryGeneratedColumn
decorators from 'typeorm'
.
The code creates and exports a class Url
annotated with the Entity
decorator that marks a class as an entity.
Each property is specified and annotated with the appropriate decorators: PrimaryGeneratedColumn
for the id and Column
for the rest of the properties. PrimaryGeneratedColumn
is a decorator that automatically generates a value for the properties it annotates. TypeOrm will use it to generate an id for each resource. Column
is a decorator that adds a property it annotates as a column in a database.
The properties that should be stored in the database include the following:
- id is the primary key for the database table.
- urlCode is the unique id generated by the nanoid package and will be used to identify each URL.
- longUrl is the URL sent to your application to be shortened.
- shortUrl is the shortened URL.
Save and close the file.
Next, you’ll create a connection between your application and your database.
First, open src/app.module.ts
in nano
or your preferred text editor:
- nano src/app.module.ts
Then, add the highlighted lines to the file:
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { Url } from './url/url.entity';
import { UrlModule } from './url/url.module';
@Module({
imports: [
TypeOrmModule.forRoot({
type: 'sqlite',
database: 'URL.sqlite',
entities: [Url],
synchronize: true,
}),
UrlModule,
],
controllers: [],
providers: [],
})
export class AppModule {}
First, import TypeOrmModule
from @nestjs/typeorm
and Url
from ./url/url.entity
. You may still have lines related to AppController
and AppService
. Leaving them in the file will not affect the rest of the tutorial.
In the imports array, call the forRoot
method on the TypeOrmModule
to share the connection through all the modules in your application. The forRoot
method takes a configuration object as an argument.
The configuration object contains properties that create the connection. These properties include the following:
- The type property denotes the kind of database you are using TypeOrm to interact with. In this case, it is set to 'sqlite'.
- The database property denotes the preferred name for your database. In this case, it is set to 'URL.sqlite'.
- The entities property is an array of all the entities in your project. In this case, you have just one entity specified as Url inside the array.
- The synchronize option automatically syncs your database tables with your entity and updates the tables each time you run the code. In this case, it is set to true.
Note: Setting synchronize to true is only ideal in a development environment. It should always be set to false
in production, as it could cause data loss.
Save and close the file.
Next, you’ll create a repository to act as an access layer between your application and your database. You’ll need to connect your entity to its parent module, and this connection enables Nest and TypeOrm to create a repository automatically.
Open src/url/url.module.ts
:
- nano src/url/url.module.ts
In the existing file, add the highlighted lines:
import { Module } from '@nestjs/common';
import { UrlService } from './url.service';
import { UrlController } from './url.controller';
import { TypeOrmModule } from '@nestjs/typeorm';
import { Url } from './url.entity';
@Module({
imports: [TypeOrmModule.forFeature([Url])],
providers: [UrlService],
controllers: [UrlController],
})
export class UrlModule {}
You create an imports
array inside the Module
decorator where you import TypeOrmModule
from @nestjs/typeorm
and Url
from ./url.entity
. Inside the imports
array, you call the forFeature
method on the TypeOrmModule
. The forFeature
method takes an array of entities as an argument, so you pass in the Url
entity.
Save and close the file.
Nest and TypeOrm will create a repository behind the scenes that will act as an access layer between your service and the database.
In this step, you connected your application to a database. You are now ready to implement the URL shortening logic.
In this step, you will implement your service logic with two methods. The first method, shortenUrl
, will contain all the URL shortening logic. The second method, redirect
, will contain all the logic to redirect a user to the original URL. You will also create a data-transfer object to validate the data coming into your application.
Before implementing these methods, you will give your service access to your repository to enable your application to read and write data in the database.
First, open src/url/url.service.ts
:
- nano src/url/url.service.ts
Add the following highlighted lines to the existing file:
import { Injectable } from '@nestjs/common';
import { Repository } from 'typeorm';
import { InjectRepository } from '@nestjs/typeorm';
import { Url } from './url.entity';
@Injectable()
export class UrlService {
constructor(
@InjectRepository(Url)
private repo: Repository<Url>,
) {}
}
You import Repository
from typeorm
, InjectRepository
from @nestjs/typeorm
, and Url
from ./url.entity
.
In your UrlService
class, you create a constructor
. Inside the constructor
, you declare a private variable, repo
, as a parameter. Then, you assign a type of Repository
to repo
with a generic type of Url
. You annotate the repo
variable with the InjectRepository
decorator and pass Url
as an argument.
Save and close the file.
Your service now has access to your repository through the repo
variable. All the database queries and TypeOrm methods will be called on it.
Next, you will create an asynchronous method, shortenUrl
. The method will take a URL as an argument and return a shortened URL. To validate that the data fed into the method is valid, you’ll use a data-transfer object together with the class-validator
and class-transformer
packages to validate the data.
Before creating the asynchronous method, you will create the data-transfer object required by the shortenUrl
async method. A data-transfer object is an object that defines how data will be sent between applications.
First, create a dtos
(data-transfer objects) folder within your url
folder:
- mkdir src/url/dtos
Then, create a file named url.dto.ts
within that folder:
- nano src/url/dtos/url.dto.ts
Add the following code to the new file:
import { IsString, IsNotEmpty } from 'class-validator';
export class ShortenURLDto {
@IsString()
@IsNotEmpty()
longUrl: string;
}
You import the IsString
and IsNotEmpty
decorators from class-validator
. Then, you create and export the class ShortenURLDto
. Inside your ShortenURLDto
class, you create a longUrl
property and assign it a type of string
.
You also annotate the longUrl
property with the IsString
and IsNotEmpty
decorators. Annotating the longUrl
property with these decorators will ensure that longUrl
is always a string and is not empty.
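The two rules these decorators enforce can be sketched as a plain function. This is only an illustration: class-validator applies the checks through decorator metadata, and the validateLongUrl helper below is a hypothetical name, not part of the tutorial's code.

```typescript
// Standalone sketch of the runtime checks behind IsString and IsNotEmpty.
// class-validator performs these via decorator metadata; this function only
// illustrates the rules themselves.
function validateLongUrl(value: unknown): string[] {
  const errors: string[] = [];
  if (typeof value !== 'string') {
    errors.push('longUrl must be a string'); // the IsString rule
  } else if (value.length === 0) {
    errors.push('longUrl should not be empty'); // the IsNotEmpty rule
  }
  return errors;
}

console.log(validateLongUrl('https://example.com')); // []
console.log(validateLongUrl(''));
console.log(validateLongUrl(42));
```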
Save and close the file.
Then, open your src/main.ts
file:
- nano src/main.ts
Add the highlighted pieces of code to the existing file:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { ValidationPipe } from '@nestjs/common';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
app.useGlobalPipes(new ValidationPipe({ whitelist: true }));
await app.listen(3000);
}
bootstrap();
You import ValidationPipe
, which uses the class-validator
package to enforce validation rules on all data coming into your application.
Then, you call the useGlobalPipes
method on your application instance (app
) and pass an instance of ValidationPipe
with an options object where the whitelist
property is set to true
. The useGlobalPipes
method binds ValidationPipe
at the application level, ensuring that all routes are protected from incorrect data. Setting the whitelist
property to true
strips validated (returned) objects of properties that are not specified in your DTO.
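Conceptually, the whitelist behavior works like the following standalone sketch. ValidationPipe does this internally from the DTO's metadata; the whitelistStrip function and allowedKeys list here are hypothetical, shown only to make the stripping visible.

```typescript
// Sketch of what `whitelist: true` does conceptually: drop any property
// that is not declared on the DTO before the handler sees the object.
type ShortenURLDto = { longUrl: string };

const allowedKeys: (keyof ShortenURLDto)[] = ['longUrl'];

function whitelistStrip(input: Record<string, unknown>): Partial<ShortenURLDto> {
  const result: Partial<ShortenURLDto> = {};
  for (const key of allowedKeys) {
    if (key in input) {
      result[key] = input[key] as string; // keep only declared properties
    }
  }
  return result;
}

// An incoming body with an unexpected `admin` property...
const body = { longUrl: 'https://example.com', admin: true };
// ...is reduced to only the DTO's declared properties.
console.log(whitelistStrip(body)); // { longUrl: 'https://example.com' }
```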
Save and close the file.
Next, you’ll import your data-transfer object into the url.service.ts
file and apply it to the shortenUrl
method.
shortenUrl
MethodThe shortenUrl
method will handle most of the URL shortening logic. It will take a parameter url
of the type ShortenURLDto
.
First, open your src/url/url.service.ts
file:
- nano src/url/url.service.ts
Add the highlighted lines to the file:
import {
BadRequestException,
Injectable,
NotFoundException,
UnprocessableEntityException,
} from '@nestjs/common';
import { Repository } from 'typeorm';
import { InjectRepository } from '@nestjs/typeorm';
import { Url } from './url.entity';
import { ShortenURLDto } from './dtos/url.dto';
import { nanoid } from 'nanoid';
import { isURL } from 'class-validator';
...
First, you import NotFoundException
, BadRequestException
, and UnprocessableEntityException
from @nestjs/common
because you will use them for error handling. Then, you import {nanoid}
from nanoid
and isURL
from class-validator
. isURL
will be used to confirm if the supplied longUrl
is a valid URL. Finally, you import ShortenURLDto
from './dtos/url.dto'
for data-validation.
Then, add the following to the UrlService
class below the constructor
:
...
async shortenUrl(url: ShortenURLDto) {}
Then, add the following code to your shortenUrl
method:
...
const { longUrl } = url;
//checks if longurl is a valid URL
if (!isURL(longUrl)) {
throw new BadRequestException('String Must be a Valid URL');
}
const urlCode = nanoid(10);
const baseURL = 'http://localhost:3000';
try {
//check if the URL has already been shortened
let url = await this.repo.findOneBy({ longUrl });
//return it if it exists
if (url) return url.shortUrl;
//if it doesn't exist, shorten it
const shortUrl = `${baseURL}/${urlCode}`;
//add the new record to the database
url = this.repo.create({
urlCode,
longUrl,
shortUrl,
});
await this.repo.save(url);
return url.shortUrl;
} catch (error) {
console.log(error);
throw new UnprocessableEntityException('Server Error');
}
In the code block above, the longUrl
was de-structured from the url
object. Then, using the isURL
method, a check will validate if longUrl
is a valid URL.
A urlCode
is generated using nanoid
. By default, nanoid
generates a unique string of twenty-one characters. Pass your desired length as an argument to override the default behavior. In this case, you pass in a value of 10
because you want the URL to be as short as possible.
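As an illustration of the idea (not nanoid's actual implementation), a URL-friendly code generator can be sketched with Node's built-in crypto module. The alphabet and bit-masking below are assumptions of this sketch; in the tutorial itself you simply call nanoid(10).

```typescript
import { randomBytes } from 'node:crypto';

// Sketch of a URL-friendly ID generator in the spirit of nanoid: pick
// characters from a 64-character URL-safe alphabet using a secure random
// source. (The real nanoid is more careful about distribution and speed.)
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_';

function generateCode(size = 10): string {
  const bytes = randomBytes(size);
  let id = '';
  for (let i = 0; i < size; i++) {
    // Mask each byte to 6 bits (0-63) to index into the 64-char alphabet.
    id += ALPHABET[bytes[i] & 63];
  }
  return id;
}

console.log(generateCode()); // a 10-character URL-safe code
```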
Then, the base URL is defined. A base URL is the consistent root of your website’s address. In development, it is your local host server; in production, it is your domain name. For this tutorial, the sample code uses localhost
.
A try-catch
block will house all the code to interact with the database for error handling.
Shortening a URL twice could lead to duplicate data, so a find query is run on the database to see if the URL exists. If it exists, its shortUrl
will be returned; otherwise, the code progresses to shorten it. If no URL record is found in the database, a short URL is created by concatenating the baseURL
and the urlCode
.
Then, a url
entity instance is created with the urlCode
, the longUrl
, and the shortUrl
. The url
instance is saved to the database by calling the save
method on repo
and passing the instance as an argument. Then, the shortUrl
is returned.
Finally, if an error occurs, the error is logged to the console in the catch block, and an UnprocessableEntityException
is thrown.
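The flow above can be sketched without a database, using an in-memory Map in place of the TypeORM repository. The shorten, resolve, and fakeCode names are illustrative only, not part of the tutorial's service; the point is the idempotency check: shortening the same URL twice returns the same short URL.

```typescript
// Database-free sketch of the shortening logic, with Maps standing in
// for the repository. Purely illustrative.
const baseURL = 'http://localhost:3000';
const byLongUrl = new Map<string, string>(); // longUrl -> shortUrl
const byCode = new Map<string, string>();    // urlCode -> longUrl

let counter = 0;
function fakeCode(): string {
  // Stand-in for nanoid(10); a real implementation uses a random code.
  return `code${counter++}`;
}

function shorten(longUrl: string): string {
  // Shortening the same URL twice returns the existing short URL.
  const existing = byLongUrl.get(longUrl);
  if (existing) return existing;

  const urlCode = fakeCode();
  const shortUrl = `${baseURL}/${urlCode}`;
  byLongUrl.set(longUrl, shortUrl);
  byCode.set(urlCode, longUrl);
  return shortUrl;
}

function resolve(urlCode: string): string | undefined {
  // Mirrors the redirect lookup: find the original URL by its code.
  return byCode.get(urlCode);
}

const short = shorten('https://example.com/very/long/path');
console.log(short); // http://localhost:3000/code0
console.log(shorten('https://example.com/very/long/path') === short); // true
```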
This is what your url.service.ts
file will now look like:
import {
BadRequestException,
Injectable,
NotFoundException,
UnprocessableEntityException,
} from '@nestjs/common';
import { Repository } from 'typeorm';
import { InjectRepository } from '@nestjs/typeorm';
import { Url } from './url.entity';
import { ShortenURLDto } from './dtos/url.dto';
import { nanoid } from 'nanoid';
import { isURL } from 'class-validator';
@Injectable()
export class UrlService {
constructor(
@InjectRepository(Url)
private repo: Repository<Url>,
) {}
async shortenUrl(url: ShortenURLDto) {
const { longUrl } = url;
//checks if longurl is a valid URL
if (!isURL(longUrl)) {
throw new BadRequestException('String Must be a Valid URL');
}
const urlCode = nanoid(10);
const baseURL = 'http://localhost:3000';
try {
//check if the URL has already been shortened
let url = await this.repo.findOneBy({ longUrl });
//return it if it exists
if (url) return url.shortUrl;
//if it doesn't exist, shorten it
const shortUrl = `${baseURL}/${urlCode}`;
//add the new record to the database
url = this.repo.create({
urlCode,
longUrl,
shortUrl,
});
await this.repo.save(url);
return url.shortUrl;
} catch (error) {
console.log(error);
throw new UnprocessableEntityException('Server Error');
}
}
}
Save the file.
Here, you set up the first part of the URL shortening logic. Next, you will implement the redirect
method in your service.
redirect
MethodThe redirect
method will contain the logic that redirects users to the long URL.
Still in the src/url/url.service.ts
file, add the following code to the bottom of your UrlService
class to implement the redirect
method:
...
async redirect(urlCode: string) {
try {
const url = await this.repo.findOneBy({ urlCode });
if (url) return url;
} catch (error) {
console.log(error);
}
//thrown when no matching record is found or the query fails
throw new NotFoundException('Resource Not Found');
}
The redirect
method takes urlCode
as an argument and tries to find a resource in the database with a matching urlCode
. If the resource exists, the method returns it; otherwise, it throws a NotFoundException
error.
Your completed url.service.ts
file will now look like this:
import {
BadRequestException,
Injectable,
NotFoundException,
UnprocessableEntityException,
} from '@nestjs/common';
import { Repository } from 'typeorm';
import { InjectRepository } from '@nestjs/typeorm';
import { Url } from './url.entity';
import { ShortenURLDto } from './dtos/url.dto';
import { nanoid } from 'nanoid';
import { isURL } from 'class-validator';
@Injectable()
export class UrlService {
constructor(
@InjectRepository(Url)
private repo: Repository<Url>,
) {}
async shortenUrl(url: ShortenURLDto) {
const { longUrl } = url;
//checks if longurl is a valid URL
if (!isURL(longUrl)) {
throw new BadRequestException('String Must be a Valid URL');
}
const urlCode = nanoid(10);
const baseURL = 'http://localhost:3000';
try {
//check if the URL has already been shortened
let url = await this.repo.findOneBy({ longUrl });
//return it if it exists
if (url) return url.shortUrl;
//if it doesn't exist, shorten it
const shortUrl = `${baseURL}/${urlCode}`;
//add the new record to the database
url = this.repo.create({
urlCode,
longUrl,
shortUrl,
});
await this.repo.save(url);
return url.shortUrl;
} catch (error) {
console.log(error);
throw new UnprocessableEntityException('Server Error');
}
}
async redirect(urlCode: string) {
try {
const url = await this.repo.findOneBy({ urlCode });
if (url) return url;
} catch (error) {
console.log(error);
}
//thrown when no matching record is found or the query fails
throw new NotFoundException('Resource Not Found');
}
}
Save and close the file.
Your URL shortening logic is now complete with two methods: one to shorten the URL and the other to redirect the shortened URL to the original URL.
In the next step, you will implement the route handler for these two methods in your controller class.
In this step, you’ll create two route handlers: a POST
route handler to handle shortening requests and a GET
route handler to handle redirection requests.
Before implementing the routes in your controller class, you must make the service available to your controller.
First, open your src/url/url.controller.ts
file:
- nano src/url/url.controller.ts
Add the highlighted lines to the file:
import { Controller } from '@nestjs/common';
import { UrlService } from './url.service';
@Controller('url')
export class UrlController {
constructor(private service: UrlService) {}
}
First, you import UrlService
from ./url.service
. Then, in your controller class, you declare a constructor and initialize a private variable, service
, as a parameter. You assign service
a type of UrlService
.
The Controller
decorator currently has a string, 'url'
as an argument, which means the controller will only handle requests made to localhost:3000/url/<route>
. This behavior introduces a bug in your URL shortening logic as the shortened URLs include a base url (localhost:3000
) and a URL code (wyt4_uyP-Il
), which combine to form the new URL (localhost:3000/wyt4_uyP-Il
). Hence, this controller cannot handle the requests, and the shortened links will return a 404 Not Found
error. To resolve this, remove the 'url'
argument from the Controller decorator and implement individual routes for each handler.
After you remove the 'url'
argument, this is what your UrlController
will look like:
@Controller()
export class UrlController {
constructor(private service: UrlService) {}
}
Still in the src/url/url.controller.ts
file, add the highlighted items to the import
statement:
import { Body, Controller, Get, Param, Post, Res } from '@nestjs/common';
import { UrlService } from './url.service';
import { ShortenURLDto } from './dtos/url.dto';
You import Body
, Get
, Param
, Post
, and Res
from @nestjs/common
and ShortenURLDto
from ./dtos/url.dto
. The decorators will be further defined as you add to this file.
Then add the following lines to the UrlController
below the constructor
to define the POST
route handler:
...
@Post('shorten')
shortenUrl(
@Body()
url: ShortenURLDto,
) {
return this.service.shortenUrl(url);
}
You create a method shortenUrl
that takes an argument of url
with a type of ShortenURLDto
. You annotate url
with the Body
decorator to extract the body object from the request object and populate the url
variable with its value.
You then annotate the whole method with the Post
decorator and pass 'shorten'
as an argument. This handler will handle all Post requests made to localhost:<port>/shorten
. You then call the shortenUrl
method on the service
and pass url
as an argument.
Next, add the following lines below the POST
route to define the GET
route handler for redirection:
...
@Get(':code')
async redirect(
@Res() res,
@Param('code')
code: string,
) {
const url = await this.service.redirect(code);
return res.redirect(url.longUrl);
}
You create a redirect
method with two parameters: res
, annotated with the Res
decorator, and code
, annotated with the Param
decorator. The Res
decorator injects the underlying Express response object into the parameter it annotates, allowing you to use library-specific methods like the Express redirect
method. The Param
decorator extracts the params
property from the req
object and populates the decorated parameter with its value.
You annotate the redirect
method with the Get
decorator and pass a route parameter, ':code'
. You then pass 'code'
as an argument to the Param
decorator.
You then call the redirect
method on service
, await
the result, and store it in a variable, url
.
Finally, you return res.redirect()
and pass url.longUrl
as an argument. This method will handle GET
requests to localhost:<port>/<code>
, or your shortened URLs.
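The way a ':code' pattern captures a path segment can be sketched with a small matcher. Nest's router does the real matching; the matchRoute function below is purely an illustration of how a pattern segment starting with ':' becomes a key in the params object that the Param decorator reads.

```typescript
// Illustrative sketch of route-parameter matching: a pattern segment
// beginning with ':' captures the corresponding path segment into params.
function matchRoute(pattern: string, path: string): Record<string, string> | null {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

console.log(matchRoute('/:code', '/MWBNHDiloW')); // { code: 'MWBNHDiloW' }
console.log(matchRoute('/shorten', '/MWBNHDiloW')); // null
```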
Your src/url/url.controller.ts
file will now look like this:
import { Body, Controller, Get, Param, Post, Res } from '@nestjs/common';
import { UrlService } from './url.service';
import { ShortenURLDto } from './dtos/url.dto';
@Controller()
export class UrlController {
constructor(private service: UrlService) {}
@Post('shorten')
shortenUrl(
@Body()
url: ShortenURLDto,
) {
return this.service.shortenUrl(url);
}
@Get(':code')
async redirect(
@Res() res,
@Param('code')
code: string,
) {
const url = await this.service.redirect(code);
return res.redirect(url.longUrl);
}
}
Save and close the file.
Now that you have defined your POST
and GET
route handlers, your URL shortener is fully functional. In the next step, you will test it.
In this step, you will test the URL shortener you defined in the previous steps.
First, start your application by running:
- npm run start
You will see the following output:
Output[Nest] 12640 - 06/08/2022, 16:20:04 LOG [NestFactory] Starting Nest application...
[Nest] 12640 - 06/08/2022, 16:20:07 LOG [InstanceLoader] AppModule dependencies initialized +2942ms
[Nest] 12640 - 06/08/2022, 16:20:07 LOG [InstanceLoader] TypeOrmModule dependencies initialized +1ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +257ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [InstanceLoader] TypeOrmModule dependencies initialized +2ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [InstanceLoader] UrlModule dependencies initialized +4ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [RoutesResolver] UrlController {/}: +68ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [RouterExplorer] Mapped {/shorten, POST} route +7ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [RouterExplorer] Mapped {/:code, GET} route +2ms
[Nest] 12640 - 06/08/2022, 16:20:08 LOG [NestApplication] Nest application successfully started +7ms
Open a new terminal to use curl
or your preferred API testing tool to make a POST
request to http://localhost:3000/shorten
with the data below or any data of your choice. For more information on using curl
, see the description in this tutorial.
Run this command to make a sample POST
request:
- curl -d "{\"longUrl\":\"http://llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch.co.uk\"}" -H "Content-Type: application/json" http://localhost:3000/shorten
The -d
flag registers the HTTP POST
request data, and the -H
flag sets the headers for the HTTP
request. Running this command will send the long URL to your application and return a shortened URL.
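If you prefer JavaScript to curl, the same request can be sketched with Node 18+'s built-in fetch. The long URL below is a placeholder, and the request is only constructed here, since actually sending it requires the NestJS server from this tutorial to be running on port 3000.

```typescript
// The same POST request the curl command makes, expressed as fetch options.
const endpoint = 'http://localhost:3000/shorten';
const payload = { longUrl: 'https://www.example.com/some/very/long/path' };

const options = {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }, // curl's -H flag
  body: JSON.stringify(payload),                   // curl's -d flag
};

console.log(options.body);

// With the server running, you would send it like this:
// const res = await fetch(endpoint, options);
// const shortUrl = await res.text();
```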
You will receive a short URL as a response, such as the following example:
http://localhost:3000/MWBNHDiloW
Finally, copy the short URL and paste the link into your browser. Then press ENTER
. You will be redirected to the original resource.
In this article, you created a URL shortener with NestJS. If you add a front-end interface, you can deploy it for public use. You can review the complete project on Github.
NestJS provides the type-safety and architecture to make your application more secure, maintainable, and scalable. To learn more, visit the NestJS official documentation.
Playwright is a great tool for end-to-end testing across browsers, including Chromium, Firefox, and WebKit. Since WebKit is the core of the Safari browser, Playwright’s cross-browser functionality makes it a good option for testing web apps. Playwright automatically installs and manages browser binaries, so you don’t have to install web drivers manually, and it supports multiple programming languages, such as Java, Python, and NodeJS. Playwright’s flexibility means it can be used as a web scraping tool or for end-to-end testing to ensure software meets its requirements.
To run Playwright, you need an appropriate environment, such as NodeJS runtime, Playwright core framework, or Playwright test runner. Your operating system might need dependencies to support Playwright. Docker, an open-source containerization platform, can serve your Playwright environment so that you don’t need to create multiple environments for different operating systems.
In this tutorial, you will set up an environment to use Playwright with Typescript for end-to-end testing, write and execute the tests, export the test report in multiple forms, and deploy the test using Docker. By the end of the tutorial, you will be able to use Playwright for your automation testing and to integrate your tests into an existing CI/CD pipeline with Docker wrapping the test environment.
To follow along with this tutorial, you will need:
- Run docker run hello-world to ensure that Docker is properly installed and ready to use.
- A text editor; this tutorial will use nano throughout.
Before implementing the end-to-end tests, you must prepare the Playwright project environment.
First, create a folder for this project:
- mkdir playwright-with-docker
Move to the new folder:
- cd playwright-with-docker
Then initialize a new Node environment:
- npm init
You will be prompted to provide information for the new project, such as the project name, version, the entry of the application, and the test command.
You will be prompted to input answers for the following prompts related to the new project:
Outputpackage name: (playwright-docker)
version: (1.0.0)
description:
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
You will see results like this:
OutputThis utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help init` for definitive documentation on these fields
and exactly what they do.
Use `npm install <pkg>` afterward to install a package and
save it as a dependency in the package.json file.
Press ^C at any time to quit.
package name: (test) playwright-docker
version: (1.0.0)
description: A project for using playwright for end-to-end testing purpose with docker for deployment
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to /your-path/test/package.json:
{
"name": "playwright-docker",
"version": "1.0.0",
"description": "A project for using playwright for end-to-end testing purpose with docker for deployment",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author":
"license": "ISC"
}
Is this OK? (yes) yes
After adding the requisite information, type yes
or press enter
to confirm the setup for the package.json
file.
Next, install the needed dependencies for the project:
- npm install --save-dev playwright
- npm install --save-dev typescript
These commands install Playwright and TypeScript within the sample project. The flag --save-dev
marks them as development dependencies, which are needed for building and testing but not for running the application in production.
Next, install type definitions for Node.JS:
- npm install --save-dev @types/node
Next, install a library to work with the TOML file for configuration:
- npm install --save-dev toml
TOML is one of the file types used for application configuration. A .toml
file is human readable, and keeping configuration in it lets you change application settings without touching the application code.
Next, install the Playwright dependencies for the host system:
- npx playwright install-deps
When prompted, enter your sudo
password.
Note: If you receive a message that you need to upgrade your Node version to a higher version like this:
OutputYou are running Node.js 10.19.0.
Playwright requires Node.js 12 or higher.
Please update your version of Node.js.
You can upgrade using these commands:
- sudo npm cache clean -f
- sudo npm install -g n
- sudo n stable
npm cache clean -f will forcibly clean all the caches, while npm install -g n and n stable will install the stable version of Node.js globally on your server. After running these commands, restart your server.
Then, install the Playwright test runner, which you will use in later steps of this tutorial:
- npm install --save-dev @playwright/test
Finally, install the supported browsers for Playwright:
- npx playwright install
With this command, you can run your tests with multiple browsers.
To prepare the TypeScript config file, open tsconfig.json
with nano
or your preferred text editor:
- nano tsconfig.json
The current file is empty. To update it for this project, add the following code:
{
"compilerOptions": {
"strict": true,
"module": "commonjs"
},
"include": ["tests"]
}
A tsconfig.json
file marks the current directory as the root of a TypeScript project. The compilerOptions
field lists the options the TypeScript compiler uses to build the project. The module
option tells the compiler which module syntax to use when the files are compiled to JavaScript. Setting the strict
field to true
enables strict type checking, which guarantees that types match the values assigned to variables and returned from methods. include
lists the file names or patterns included in the compilation; in this case, all the files in the tests
directory.
Save and close the file when finished.
With your environment set up, you can now begin building your tests.
With the Playwright testing environment that you prepared in the first step, you will now write three example tests connected to the DigitalOcean Droplets Page.
The TypeScript tests that you will build will verify the following three items:
Sign up with Email, Sign up with Google, and Sign up with Github.
While you can have all three tests in the same test file, this tutorial will use three separate files because each test serves a different purpose.
Create a new directory named tests
to hold all the test files:
- mkdir tests
Then navigate to the tests
directory:
- cd tests
Because you will run three tests with different purposes, you will create three separate test files later in this step that are all located within the tests
directory in the project:
- signUpMethods.spec.ts will implement the test for verifying the number of supported methods for users to sign up.
- multiplePackages.spec.ts will implement the test to verify the number of packages customers can choose.
- pricingComparison.spec.ts will verify the basic virtual machine costs.
Note: The default format of the test files will be *.spec.ts
(for TypeScript projects) or *.spec.js
(for JavaScript projects).
The configuration file for the tests will be named configTypes.ts
and is also put in the tests
directory. In this file, you will define global variables for interacting with multiple browsers and their pages. You will also define some configuration values used in the test, such as the URL of the application being tested. This tutorial will use DIGITAL_OCEAN_URL
for the URL being tested.
Create configTypes.ts
:
- nano configTypes.ts
Add the following code to the currently empty configTypes.ts
file:
import { Browser, Page } from "playwright";
import fs from 'fs';
import toml from 'toml';
const config = toml.parse(fs.readFileSync('./config.toml', 'utf-8'));
declare global {
const page: Page;
const browser: Browser;
const browserName: string;
}
export default {
DIGITAL_OCEAN_URL: config.digital_ocean_url ?? '',
};
First, the import
statements bring in the Playwright types along with the fs
and toml
modules, which are used to read the configuration content from ./config.toml
at the project’s home directory.
You declare global variables for page
, browser
, browserName
, which will be used for initializing page and browser instances in the end-to-end tests.
Finally, you export DIGITAL_OCEAN_URL
with the value read from ./config.toml
by digital_ocean_url
key, so that you can use this URL in your tests later.
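The ?? ''
fallback in the export deserves a brief note: the nullish coalescing operator returns the right-hand value only when the left side is null or undefined, so a missing key yields an empty string instead of undefined. A minimal sketch of that behavior (the ParsedConfig interface and resolveUrl helper are illustrative, not part of the project):

```typescript
// Sketch of the fallback used in configTypes.ts: `??` substitutes the
// right-hand value only when the key is null or undefined.
interface ParsedConfig {
  digital_ocean_url?: string;
}

function resolveUrl(config: ParsedConfig): string {
  return config.digital_ocean_url ?? '';
}

console.log(resolveUrl({ digital_ocean_url: 'https://www.digitalocean.com/products/droplets' }));
console.log(JSON.stringify(resolveUrl({}))); // missing key falls back to ""
```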
Save and close the file when finished.
For the first test, create and open the signUpMethods.spec.ts
file using nano
or your preferred text editor:
- nano signUpMethods.spec.ts
Add the following code to the empty file:
import endpoint from "./configTypes"
import { test, expect } from '@playwright/test'
test("Expect to have 3 options for signing up", async ({ page }) => {
// Go to the Droplets product page of DigitalOcean web page
await page.goto(endpoint.DIGITAL_OCEAN_URL);
// Wait for the page to load
await page.waitForLoadState('networkidle');
// Get the number of signUp options
const number_subscriptions_allowed = await page.locator('.SignupButtonsStyles__ButtonContainer-sc-yg5bly-0 > a').count()
// Verify that number equals 3
expect(number_subscriptions_allowed).toBe(3)
});
The signUpMethods.spec.ts
file contains the code for the test that assesses whether the Droplets Page has three options for sign up. You import the test
and expect
methods in the first two lines.
Tests can be written asynchronously or synchronously. Writing a test in an asynchronous manner helps optimize the speed of the test, since you do not have to wait for each step in the test to finish before executing the next step. You use the await
keyword when you need a step to finish before moving to the following action. Since the steps here are related to web interactions, you need to ensure that each element in the user interface is displayed before executing the action, which is why you place the await
keyword before every action call.
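The sequencing that await
provides can be illustrated without a browser. In this hypothetical sketch (the step
and runScenario
names are illustrative), each awaited promise settles before the next line runs, mirroring how each Playwright action completes before the following one starts:

```typescript
// Minimal illustration of why each test step is awaited: `await` pauses the
// async function until the promise settles, so steps run strictly in order.
async function step(name: string): Promise<string> {
  return `${name} finished`;
}

async function runScenario(): Promise<string[]> {
  const results: string[] = [];
  results.push(await step('page.goto'));             // completes first
  results.push(await step('page.waitForLoadState')); // then this one
  return results;
}

runScenario().then((results) => console.log(results.join(', ')));
```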
The test is defined in the test
block with four actions. The first await
keyword uses the page.goto()
function to tell the test to go to the DIGITAL_OCEAN_URL
that has been defined in the configTypes.ts
file. You put the global variable page
in the async
declaration so you can use page instances throughout the test without needing to initialize it.
The second await
keyword tells the test to wait for the page to load using the page.waitForLoadState()
function. If there are API calls that have not finished, there may be elements on the page that are not available, and, as a result, the test could fail because it can’t find that element.
You define the number_subscriptions_allowed
variable to use the page.locator()
function to look for the number of sign-up options. You find the sign-up options component by its CSS selector (in this case, the container of the sign-up buttons), which allows you to count the child elements it contains.
Finally, an expect
method will validate the number of options found by page.locator()
with the expected output of 3
.
Save and close the file.
Next, you will write the second test. Create and open the multiplePackages.spec.ts
file:
- nano multiplePackages.spec.ts
In the empty file, add the following code:
import endpoint from "./configTypes"
import { test, expect } from '@playwright/test'
test("Expect to have 3 packages for subscription", async ({ page }) => {
// Go to the Droplets product page of DigitalOcean web page
await page.goto(endpoint.DIGITAL_OCEAN_URL);
// Wait for the page to load
await page.waitForLoadState('networkidle');
// Get the number of packages to be 2 (Basic and Premium)
const number_subscriptions_allowed = await page.locator('.CPUInfoStyles__StyledLeftCpuInfo-sc-ooo7a2-4 > div').count()
// Verify that number equals 2
expect(number_subscriptions_allowed).toBe(2)
});
Similar to the signUpMethods.spec.ts
file, you will import the test configuration and test functions from Playwright dependencies.
In this test, you go to the DIGITAL_OCEAN_URL
first by using page.goto()
. Then you wait for the page to complete all the network calls with page.waitForLoadState()
.
You find the child elements of the subscription component in the web UI and store that information in the number_subscriptions_allowed
variable.
Finally, you compare the value of number_subscriptions_allowed
with the expected output of 2
.
Save and close the file when finished.
Then, create and open the pricingComparison.spec.ts
file to define the third test:
- nano pricingComparison.spec.ts
Add the following code to the empty file:
import endpoint from "./configTypes"
import { test, expect } from '@playwright/test'
test("Expect to have 3 packages for subscription", async ({ page }) => {
// Go to the Droplets product page of DigitalOcean web page
await page.goto(endpoint.DIGITAL_OCEAN_URL);
// Wait for the page to load
await page.waitForLoadState('networkidle');
// Get the number of basic virtual machine costs (1 CPU, 2 CPU, 4 CPU, 8 CPU)
const number_subscriptions_allowed = await page.locator('.PricingComparisonToolStyles__StyledCpuSelector-sc-1k0sndv-7 > button').count()
// Verify that number equals 4
expect(number_subscriptions_allowed).toBe(4)
});
The async function in this test uses the same page.goto()
URL and page.waitForLoadState()
directions as in the previous tests. Because this test is connected to the subscription packages available on the Droplets page, the second half of the code block sets up that test.
For this test, you get the number of child elements for the pricing options component and store that value in the number_subscriptions_allowed
variable. You validate that the value of number_subscriptions_allowed
must equal 4
(the number of subscriptions currently supported).
Save and close the file.
In your tests, you use DIGITAL_OCEAN_URL
from the configTypes.ts
, and configTypes.ts
reads digital_ocean_url
value from the ./config.toml
file.
You will now create the config.toml
file in the project’s home directory. Navigate to the home directory:
- cd ..
Then create the config.toml
file:
- nano config.toml
Copy the following content into the config.toml
file:
digital_ocean_url="https://www.digitalocean.com/products/droplets"
Save and close the file.
The directory tree of the project now will look like this:
In this step, you wrote the three tests that you will use. You also defined the config.toml
file that the tests rely on. You will execute the tests in the next step.
There are many options for using the Playwright test runner in the CLI, such as running all tests with all browsers, disabling parallelization, running a set of test files, and running in debug mode, among others. In this step, you will run the tests with all browsers.
First, run this command:
- npx playwright test --browser=all
You should be able to see the test results like so:
OutputRunning 9 tests using 1 worker
✓ [chromium] › tests/multiplePackages.spec.ts:4:1 › Expect to have 3 packages for subscription (6s)
✓ [chromium] › tests/pricingComparison.spec.ts:4:1 › Expect to have 3 packages for subscription (4s)
✓ [chromium] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (3s)
✓ [firefox] › tests/multiplePackages.spec.ts:4:1 › Expect to have 3 packages for subscription (9s)
✓ [firefox] › tests/pricingComparison.spec.ts:4:1 › Expect to have 3 packages for subscription (5s)
✓ [firefox] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (7s)
✓ [webkit] › tests/multiplePackages.spec.ts:4:1 › Expect to have 3 packages for subscription (7s)
✓ [webkit] › tests/pricingComparison.spec.ts:4:1 › Expect to have 3 packages for subscription (6s)
✓ [webkit] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (6s)
9 passed (1m)
The checkmark indicates that all tests have passed in the three browsers (Chromium, Firefox, and Webkit).
The number of workers will depend on the number of cores that the current server is using and the current configuration for the test. You can limit the number of workers by setting the workers
value in the playwright.config.ts
file. For more information on test configuration, you can read the Playwright product docs.
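For example, a playwright.config.ts
along these lines would cap parallelism. This is a hedged sketch for the 1.x API used in this tutorial; the value of 2
is illustrative:

```typescript
// Hypothetical playwright.config.ts limiting the run to 2 parallel workers.
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  workers: 2, // cap parallelism regardless of available CPU cores
};

export default config;
```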
The Playwright test runner provides several options for the test report that can be integrated into CI tools such as Jenkins or CircleCI. For more information on test reports, see the Playwright test reporters documentation page.
For this tutorial, you will run the test with the HTML report file, which provides easier readability than viewing tests in the CLI.
Run this command for the HTML test report:
- npx playwright test --browser=all --reporter=html
You will see a result like this:
OutputRunning 9 tests using 2 workers
9 passed (40s)
To open last HTML report run:
npx playwright show-report
To view the HTML report, run:
- npx playwright show-report
You will see an output like this:
OutputServing HTML report at http://your_ip_address:9323. Press Ctrl+C to quit.
You should now be able to access your report via port 9323
.
Note: If you are accessing the server remotely, you will need to expose your remote server to the current local machine to view the test report in your local browser. In a new terminal session on your local machine, run the following command:
- ssh -L 9323:localhost:9323 your_non_root_user@your_server_ip
SSH port forwarding will forward the server port to the local port. The -L 9323:localhost:9323
section identifies that port 9323
on the local machine will be forwarded to the same port on the remote server.
You should now be able to view the test report by navigating to http://localhost:9323
in a browser on your local machine.
When your report loads in the browser, you will observe that each test has been run on three browsers: Chromium, Firefox, and Webkit. You will know how long each test on each browser took to run, as well as how long the entire test took.
Click the report name to see the details.
In the details section, the test execution steps will feature Before Hooks
and After Hooks
steps by default. The Before Hooks
section is often used for initial setup, such as logging into the console or reading test data. After the test execution, the After Hooks
section will often clean test data in the test environment. There are details for each step in the test, including visiting the URL with page.goto()
, waiting for the page to load with page.waitForLoadState()
, counting the sign-up methods with locator.count()
, and verifying the values match with expect.toBe
.
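The hook pattern the report surfaces can be sketched generically. This hypothetical example (plain TypeScript, not Playwright’s own implementation) shows the ordering the report reflects: setup runs before the test body, teardown after:

```typescript
// Generic sketch of the before/body/after hook ordering shown in the report.
type Hook = (trace: string[]) => void;

function runTest(before: Hook, body: Hook, after: Hook): string[] {
  const trace: string[] = [];
  before(trace); // e.g. log in or prepare test data
  body(trace);   // the test steps themselves
  after(trace);  // e.g. clean up test data
  return trace;
}

const order = runTest(
  (t) => t.push('before hook'),
  (t) => t.push('test body'),
  (t) => t.push('after hook')
);

console.log(order.join(' -> ')); // prints "before hook -> test body -> after hook"
```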
In this step, you ran all three tests, reviewed their pass state, and viewed the test results in both CLI and HTML formats. Next, you will automate the tests with Docker.
When implementing test automation, you may face environmental issues. Some tests will run as expected on the local machine of a test engineer but then fail when integrated into a CI/CD pipeline due to environment compatibility problems. To avoid this issue, you can use Docker containers to run automation testing, which you will set up in this step. If the testing runs as expected in the local environment with Docker, there is a high probability that you can avoid compatibility issues in the CI/CD pipeline.
First, you will update the package.json
file to add the necessary test scripts that will run later in Docker. Open the file:
- nano package.json
Add the highlighted lines to the scripts
section in the package.json
file:
...
"scripts": {
"test": "playwright test --browser=all",
"test-html-report": "playwright test --browser=all --reporter=html",
"test-json-report": "PLAYWRIGHT_JSON_OUTPUT_NAME=results.json playwright test --browser=chromium --reporter=json"
},
These scripts act as shortcuts so that you do not have to type out the full command each time. When you need to run the tests with the HTML reporter, you will now be able to run this command:
- npm run test-html-report
In place of the full command:
- npx playwright test --browser=all --reporter=html
Your current package.json
will look like this:
{
"name": "playwright-docker",
"version": "1.0.0",
"description": "A project for using playwright for end-to-end testing purpose with docker for deployment",
"main": "index.js",
"scripts": {
"test": "playwright test --browser=all",
"test-html-report": "playwright test --browser=all --reporter=html",
"test-json-report": "PLAYWRIGHT_JSON_OUTPUT_NAME=results.json playwright test --browser=chromium --reporter=json"
},
"author": "",
"license": "ISC",
"devDependencies": {
"@playwright/test": "^1.22.2",
"@types/node": "^17.0.35",
"playwright": "^1.22.1",
"toml": "^3.0.0",
"typescript": "^4.6.4"
}
}
Save and close the file.
Next, create and open a Dockerfile in the current directory:
- nano Dockerfile
Then add the following content to it:
# Get the base image of Node version 16
FROM node:16
# Get the latest version of Playwright
FROM mcr.microsoft.com/playwright:focal
# Set the work directory for the application
WORKDIR /app
# Set the environment path to node_modules/.bin
ENV PATH /app/node_modules/.bin:$PATH
# COPY the needed files to the app folder in Docker image
COPY package.json /app/
COPY tests/ /app/tests/
COPY tsconfig.json /app/
COPY config.toml /app/
# Get the needed libraries to run Playwright
RUN apt-get update && apt-get -y install libnss3 libatk-bridge2.0-0 libdrm-dev libxkbcommon-dev libgbm-dev libasound-dev libatspi2.0-0 libxshmfence-dev
# Install the dependencies in Node environment
RUN npm install
First, you pull the Node version 16 base image and the Playwright image tagged focal
into your Docker image. The tests require Node and Playwright to run.
Then, you set the working directory for the application inside the container using the WORKDIR
instruction. Setting WORKDIR /app
will put all your files inside the /app
directory inside the container.
You set the environment path for the Docker container with ENV PATH
. In this case, you set it to the node_modules
directory.
Then, you copy all the necessary files to the /app
directory in the Docker image.
Because Playwright requires some dependencies to run, you will also install those dependencies in the Docker image.
Save and close the file.
Next, you will build the image for your automation project:
- docker build -t playwright-docker .
Docker will find Dockerfile
in the current directory and build the image by following the instructions inside Dockerfile
. The -t
flag tags the Docker image by naming it playwright-docker
. The .
tells Docker to look for Dockerfile
in this current directory. You can review the Docker product docs for more about building Docker images.
The build output (shortened for brevity) will look similar to this:
OutputSending build context to Docker daemon 76.61MB
...
added 6 packages, and audited 7 packages in 6s
found 0 vulnerabilities
Removing intermediate container 87520d179fd1
---> 433ae116d06a
Successfully built 433ae116d06a
Successfully tagged playwright-docker:latest
The tests might not run correctly on Windows or macOS due to conflicting or missing dependencies during the initial setup, but using Docker to run the tests prevents these environment configuration problems. With Docker, your base image contains all required dependencies, so the tests can run on any operating system where Docker is installed.
Check if the Docker image was created successfully:
- docker image ls
The result should be similar to this:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
playwright-docker latest 433ae116d06a 5 minutes ago 1.92GB
mcr.microsoft.com/playwright focal bb9872cfd272 2 days ago 1.76GB
node 16 c6b745e900c7 6 days ago 907MB
You will have playwright-docker
(the test image), microsoft playwright
, and node
images. You may also have images for ubuntu
and hello-world
from the Docker installation prerequisite.
Now run the test command in your Docker container using docker run
:
- docker run -it playwright-docker:latest npm run test
docker run
will run the specified Docker image with the command. In this example, the image is playwright-docker:latest
and the command is npm run test
. docker run
will first bring up the Docker container, then run the needed command. You can check out more in the Docker product docs.
The result will look like this:
Output> playwright-docker@1.0.0 test
> playwright test --browser=all
Running 9 tests using 2 workers
✓ [chromium] › tests/pricingComparison.spec.ts:4:1 › Expect to have 4 pricing options (7s)
✓ [chromium] › tests/multiplePackages.spec.ts:4:1 › Expect to have 2 packages for subscription (8s)
✓ [chromium] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (8s)
✓ [firefox] › tests/multiplePackages.spec.ts:4:1 › Expect to have 2 packages for subscription (9s)
✓ [firefox] › tests/pricingComparison.spec.ts:4:1 › Expect to have 4 pricing options (8s)
✓ [firefox] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (5s)
✓ [webkit] › tests/multiplePackages.spec.ts:4:1 › Expect to have 2 packages for subscription (8s)
✓ [webkit] › tests/pricingComparison.spec.ts:4:1 › Expect to have 4 pricing options (10s)
✓ [webkit] › tests/signUpMethods.spec.ts:4:1 › Expect to have 3 options for signing up (7s)
9 passed (41s)
The tests have now successfully run in the Docker environment. You can safely update the code to the remote repository, and system administrators can integrate the automation test into the CI/CD pipeline.
You can also review the files created in this article in this repository.
You have now used Playwright for end-to-end testing and deployed the tests with Docker. For more about Playwright, visit the Playwright documentation.
You can read about the Docker ecosystem to learn more about Docker. Docker product documentation also has some best practices for writing Dockerfiles and a Dockerfile reference guide. To continue your work with Docker, you can try out other Docker tutorials.
Node.js is a popular JavaScript runtime environment that helps you work with front-end JavaScript libraries such as React, Angular, and Vue. You can also build full-stack applications using Express and Nest frameworks. To build JavaScript applications, you will need a local Node environment.
In this tutorial, you will set up a local Node.js programming environment for your Windows computer.
You will need a desktop or laptop computer running Windows 10 with administrative access and an internet connection.
Node Version Manager or NVM is the preferred method to install Node.js on your computer. NVM lets you maintain multiple versions of Node.js at once, which is helpful if you need to use specific Node versions for different projects. NVM has a Windows version that you will use to install Node.js in this step.
Visit the NVM-windows releases page to acquire the latest version. As of writing this tutorial, the latest NVM version is 1.1.9.
Scroll to the Assets section and click on nvm-setup.exe
to download the setup file to your computer’s downloads folder:
After the download finishes, go to your downloads location and double-click the nvm-setup.exe
file to start the installation process.
The installation wizard will load and provide options to select, such as the destination folder for the tool:
Follow the installation prompts to install NVM on your computer.
Next, open the Terminal, Command Prompt, or PowerShell as Administrator on your computer.
Use this command to verify the NVM installation:
- nvm -v
You will see the following output with the NVM version number:
OutputRunning version 1.1.9.
...
You can view which Node versions are available for you to install with this command:
- nvm list available
You will see a list of Node versions:
Output| CURRENT | LTS | OLD STABLE | OLD UNSTABLE |
|--------------|--------------|--------------|--------------|
| 18.7.0 | 16.16.0 | 0.12.18 | 0.11.16 |
| 18.6.0 | 16.15.1 | 0.12.17 | 0.11.15 |
| 18.5.0 | 16.15.0 | 0.12.16 | 0.11.14 |
| 18.4.0 | 16.14.2 | 0.12.15 | 0.11.13 |
| 18.3.0 | 16.14.1 | 0.12.14 | 0.11.12 |
| 18.2.0 | 16.14.0 | 0.12.13 | 0.11.11 |
| 18.1.0 | 16.13.2 | 0.12.12 | 0.11.10 |
| 18.0.0 | 16.13.1 | 0.12.11 | 0.11.9 |
| 17.9.1 | 16.13.0 | 0.12.10 | 0.11.8 |
| 17.9.0 | 14.20.0 | 0.12.9 | 0.11.7 |
| 17.8.0 | 14.19.3 | 0.12.8 | 0.11.6 |
| 17.7.2 | 14.19.2 | 0.12.7 | 0.11.5 |
| 17.7.1 | 14.19.1 | 0.12.6 | 0.11.4 |
| 17.7.0 | 14.19.0 | 0.12.5 | 0.11.3 |
| 17.6.0 | 14.18.3 | 0.12.4 | 0.11.2 |
| 17.5.0 | 14.18.2 | 0.12.3 | 0.11.1 |
| 17.4.0 | 14.18.1 | 0.12.2 | 0.11.0 |
| 17.3.1 | 14.18.0 | 0.12.1 | 0.9.12 |
| 17.3.0 | 14.17.6 | 0.12.0 | 0.9.11 |
| 17.2.0 | 14.17.5 | 0.10.48 | 0.9.10 |
Node has two release lines: Current and LTS (long-term support). For development purposes, it’s recommended to install the LTS version. You can also read more about which Node version to use.
You will then install the latest LTS version from this list with the following command:
- nvm install 16.16.0
Node.js version 16.16.0 will be installed on your computer:
OutputDownloading node.js version 16.16.0 (64-bit)...
Extracting...
Complete
Installation complete. If you want to use this version, type
nvm use 16.16.0
Review the Node versions installed on your computer:
- nvm list
You will see a list with the available Node versions:
Output 16.16.0
* 16.15.0 (Currently using 64-bit executable)
14.16.0
8.12.0
If you have more than one version installed, you can select a different version from this list with nvm use
, specifying the version you would like to use:
- nvm use 16.16.0
You will see an output like this:
OutputNow using node v16.16.0 (64-bit)
Use the following command to verify the Node version:
- node --version
You will see the Node version in the output:
Outputv16.16.0
Node also installs the Node Package Manager (NPM) to install and manage Node packages. Use the following command to verify the NPM version:
- npm --version
You will see the NPM version in the output:
Output8.11.0
In this step, you installed Node. To complete your local development environment setup, you will also need Git Bash on your Windows computer, which you will install in the next step.
In this step, you will install Git Bash on your computer. Git is a popular version control system, while Bash is a popular terminal program for the Linux operating system.
As a Windows user, you can do most tasks with the built-in Windows command prompt or PowerShell. However, Linux-based commands are the standard in modern development workflows. By using and learning Bash commands, you will be able to follow the majority of programming tutorials.
If you are running Windows 11 or have the latest development version of Windows 10, you can install Git using the winget
command line utility:
- winget install --id Git.Git -e --source winget
The winget
tool is the client interface to the Windows Package Manager service.
The --id
flag tells winget
to install a package identified by its unique ID. The -e
or --exact
flag makes winget
match that ID exactly, including its case. The --source
flag ensures installation from the given source: in this case, the winget
repository.
You can also install Git Bash with the installation wizard by visiting Git’s website:
If you choose to use the installation wizard, you can run the installation file with the default settings when it finishes downloading:
To verify your Git installation, run the following command:
- git --version
You will see the version:
Outputgit version 2.30.2.windows.1
With the necessary tools on your computer, you can now create a simple Node.js program to test that everything works as expected.
In this step, you will create a simple “Hello, World” app to test the Node.js runtime.
Open the Git Bash app you just installed. Then use the following command to create a new file with nano
, a command-line text editor:
- nano hello.js
Alternatively, you can open this file in your preferred editor, such as VSCode.
Add the following lines to the hello.js
file:
let message = "Hello, World!";
console.log(message);
First, you define the message
variable with a string of Hello, World!
. Then, console.log
will display the contents of the message
variable when the file is run.
Save and close the file.
Now run this program with Node:
- node hello.js
The program executes and displays its output to the screen:
OutputHello, World!
Node.js allows you to execute JavaScript code without a browser, which is why you could run the hello.js
file.
Node is a robust JavaScript runtime environment. In this tutorial, you created your local Node development environment in Windows 10.
Now that you have your local development environment set up in Windows, you can set up a Node server and start building front-end applications by following our tutorials for React, Angular, and Vue.js. For full-stack development, you can set up projects in Express.
There are multiple ways to enhance the flexibility and security of your Node.js application. Using a reverse proxy like Nginx offers you the ability to load balance requests, cache static content, and implement Transport Layer Security (TLS). Enabling encrypted HTTPS on your server ensures that communication to and from your application remains secure.
Implementing a reverse proxy with TLS/SSL on containers involves a different set of procedures from working directly on a host operating system. For example, if you were obtaining certificates from Let’s Encrypt for an application running on a server, you would install the required software directly on your host. Containers allow you to take a different approach. Using Docker Compose, you can create containers for your application, your web server, and the Certbot client that will enable you to obtain your certificates. By following these steps, you can take advantage of the modularity and portability of a containerized workflow.
In this tutorial, you will deploy a Node.js application with an Nginx reverse proxy using Docker Compose. You will obtain TLS/SSL certificates for the domain associated with your application and ensure that it receives a high security rating from SSL Labs. Finally, you will set up a cron
job to renew your certificates so that your domain remains secure.
To follow this tutorial, you will need:
An Ubuntu 18.04 server, a non-root user with sudo
privileges, and an active firewall. For guidance on how to set these up, please read this Initial Server Setup guide.
Docker and Docker Compose installed on your server. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04. For guidance on installing Compose, follow Step 1 of How To Install Docker Compose on Ubuntu 18.04.
A registered domain name. This tutorial will use your_domain throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:
- your_domain pointing to your server’s public IP address.
- www.your_domain pointing to your server’s public IP address.
Once you have everything set up, you’re ready to begin the first step.
As a first step, you’ll clone the repository with the Node application code, which includes the Dockerfile to build your application image with Compose. Then you’ll test the application by building and running it with the docker run
command, without a reverse proxy or SSL.
In your non-root user’s home directory, clone the nodejs-image-demo
repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.
Clone the repository into a directory. This example uses node_project
as the directory name. Feel free to name this directory to your liking:
- git clone https://github.com/do-community/nodejs-image-demo.git node_project
Change into the node_project
directory:
- cd node_project
In this directory, there is a Dockerfile that contains instructions for building a Node application using the Docker node:10
image and the contents of your current project directory. You can preview the contents of the Dockerfile with the following:
- cat Dockerfile
OutputFROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
These instructions build a Node image by copying the project code from the current directory to the container and installing dependencies with npm install
. They also take advantage of Docker’s caching and image layering by separating the copy of package.json
and package-lock.json
, containing the project’s listed dependencies, from the copy of the rest of the application code. Finally, the instructions specify that the container will be run as the non-root node user with the appropriate permissions set on the application code and node_modules
directories.
For more information about this Dockerfile and Node image best practices, please explore the complete discussion in Step 3 of How To Build a Node.js Application with Docker.
To test the application without SSL, you can build and tag the image using docker build
and the -t
flag. This example names the image node-demo
, but you are free to name it something else:
- docker build -t node-demo .
Once the build process is complete, you can list your images with docker images
:
- docker images
The following output confirms the application image build:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
node-demo latest 23961524051d 7 seconds ago 73MB
node 10-alpine 8a752d5af4ce 3 weeks ago 70.7MB
Next, create the container with docker run
. Three flags are included with this command:
- -p: This publishes the port on the container and maps it to a port on your host. You will use port 80 on the host in this example, but feel free to modify this as necessary if you have another process running on that port. For more information about how this works, review this discussion in the Docker documentation on port binding.
- -d: This runs the container in the background.
- --name: This allows you to give the container a memorable name.
Run the following command to build the container:
- docker run --name node-demo -p 80:8080 -d node-demo
Inspect your running containers with docker ps
:
- docker ps
The following output confirms that your application container is running:
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4133b72391da node-demo "node app.js" 17 seconds ago Up 16 seconds 0.0.0.0:80->8080/tcp node-demo
You can now visit your domain to test your setup: http://your_domain
. Remember to replace your_domain
with your own domain name. Your application will display the following landing page:
Now that you have tested the application, you can stop the container and remove the images. Use docker ps
to get your CONTAINER ID
:
- docker ps
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4133b72391da node-demo "node app.js" 17 seconds ago Up 16 seconds 0.0.0.0:80->8080/tcp node-demo
Stop the container with docker stop
. Be sure to replace the CONTAINER ID
listed here with your own application CONTAINER ID
:
- docker stop 4133b72391da
You can now remove the stopped container and all of the images, including unused and dangling images, with docker system prune
and the -a
flag:
- docker system prune -a
Press y
when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.
With your application image tested, you can move on to building the rest of your setup with Docker Compose.
With our application Dockerfile in place, you’ll create a configuration file to run your Nginx container. You can start with a minimal configuration that will include your domain name, document root, proxy information, and a location block to direct Certbot’s requests to the .well-known
directory, where it will place a temporary file to validate that the DNS for your domain resolves to your server.
First, create a directory in the current project directory, node_project
, for the configuration file:
- mkdir nginx-conf
Create and open the file with nano
or your favorite editor:
- nano nginx-conf/nginx.conf
Add the following server block to proxy user requests to your Node application container and to direct Certbot’s requests to the .well-known
directory. Be sure to replace your_domain
with your own domain name:
server {
listen 80;
listen [::]:80;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name your_domain www.your_domain;
location / {
proxy_pass http://nodejs:8080;
}
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
}
This server block will allow you to start the Nginx container as a reverse proxy, which will pass requests to your Node application container. It will also allow you to use Certbot’s webroot plugin to obtain certificates for your domain. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.
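To make the validation flow concrete, this is the path the webroot plugin relies on (the token name is illustrative; Let's Encrypt generates a random value for each challenge):

```text
Let's Encrypt requests:   http://your_domain/.well-known/acme-challenge/<token>
Nginx serves it from:     /var/www/html/.well-known/acme-challenge/<token>
```

Later in this tutorial, the Nginx and Certbot containers will share the same webroot volume, so the challenge file Certbot writes becomes immediately visible to Nginx.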
Once you have finished editing, save and close the file. If you used nano
, you can do this by pressing CTRL + X
, then Y
, and ENTER
. To learn more about Nginx server and location block algorithms, please refer to this article on Understanding Nginx Server and Location Block Selection Algorithms.
With the web server configuration details in place, you can move on to creating your docker-compose.yml
file, which will allow you to create your application services and the Certbot container you will use to obtain your certificates.
The docker-compose.yml
file will define your services, including the Node application and web server. It will specify details like named volumes, which will be critical to sharing SSL credentials between containers, as well as network and port information. It will also allow you to specify commands to run when your containers are created. This file is the central resource that will define how your services will work together.
Create and open the file in your current directory:
- nano docker-compose.yml
First, define the application service:
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
The nodejs
service definition includes the following:
build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
context: This defines the build context for the application image build. In this case, it's the current project directory, represented with the dot (.).
dockerfile: This specifies the Dockerfile that Compose will use for the build, in this case the Dockerfile reviewed in Step 1.
image, container_name: These apply names to the image and container.
restart: This defines the restart policy. The default is no, but in this example, the container is set to restart unless it is stopped.
Note that you are not including bind mounts with this service, since your setup is focused on deployment rather than development. For more information, please read the Docker documentation on bind mounts and volumes.
To enable communication between the application and web server containers, add a bridge network called app-network
after the restart definition:
services:
nodejs:
...
networks:
- app-network
A user-defined bridge network like this enables communication between containers on the same Docker daemon host. This streamlines traffic and communication within your application, since it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, you can be selective about opening only the ports you need to expose your frontend services.
Next, define the webserver
service:
...
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
depends_on:
- nodejs
networks:
- app-network
Some of the settings defined here for the nodejs
service remain the same, but some of the following changes were made:
image: This tells Compose to pull the latest Alpine-based Nginx image from Docker Hub. For more information about alpine images, please read Step 3 of How To Build a Node.js Application with Docker.
ports: This exposes port 80 to enable the configuration options you've defined in your Nginx configuration.
The following named volumes and bind mounts are also specified:
web-root:/var/www/html: This will add your site's static assets, copied to a volume called web-root, to the /var/www/html directory on the container.
./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes you make to files on the host will be reflected in the container.
certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for your domain to the appropriate directory on the container.
certbot-var:/var/lib/letsencrypt: This mounts Let's Encrypt's default working directory to the appropriate directory on the container.
Next, add the configuration options for the certbot container. Be sure to replace the domain and email information with your own domain name and contact email:
...
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --webroot --webroot-path=/var/www/html --email sammy@your_domain --agree-tos --no-eff-email --staging -d your_domain -d www.your_domain
This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc
, the Let’s Encrypt working directory in certbot-var
, and the application code in web-root
.
Again, you’ve used depends_on
to specify that the certbot
container should be started once the webserver
service is running.
The command
option specifies the command to run when the container is started. It includes the certonly
subcommand with the following options:
--webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication.
--webroot-path: This specifies the path of the webroot directory.
--email: Your preferred email for registration and recovery.
--agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
--no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
--staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please read Let's Encrypt's rate limits documentation.
-d: This allows you to specify domain names you would like to apply to your request. In this case, you've included your_domain and www.your_domain. Be sure to replace these with your own domains.
As a final step, add the volume and network definitions. Be sure to replace the username here with your own non-root user:
...
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/views/
o: bind
networks:
app-network:
driver: bridge
Your named volumes include your Certbot certificate and working directory volumes, and the volume for your site’s static assets, web-root
. In most cases, the default driver for Docker volumes is the local
driver, which on Linux accepts options similar to the mount
command. Thanks to this, you are able to specify a list of driver options with driver_opts
that mount the views
directory on the host, which contains your application’s static assets, to the volume at runtime. The directory contents can then be shared between containers. For more information about the contents of the views
directory, please read Step 2 of How To Build a Node.js Application with Docker.
The following is the complete docker-compose.yml
file:
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
networks:
- app-network
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
depends_on:
- nodejs
networks:
- app-network
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --webroot --webroot-path=/var/www/html --email sammy@your_domain --agree-tos --no-eff-email --staging -d your_domain -d www.your_domain
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/views/
o: bind
networks:
app-network:
driver: bridge
With the service definitions in place, you are ready to start the containers and test your certificate requests.
You can start the containers with docker-compose up
. This will create and run your containers and services in the order you have specified. Once your domain requests succeed, your certificates will be mounted to the /etc/letsencrypt/live
folder on the webserver
container.
Create the services with docker-compose up
with the -d
flag, which will run the nodejs
and webserver
containers in the background:
- docker-compose up -d
Your output will confirm that your services have been created:
Output
Creating nodejs    ... done
Creating webserver ... done
Creating certbot   ... done
Use docker-compose ps
to check the status of your services:
- docker-compose ps
If everything was successful, your nodejs
and webserver
services will be Up
and the certbot
container will have exited with a 0
status message:
Output
   Name                  Command               State          Ports
------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
If you notice anything other than Up
in the State
column for the nodejs
and webserver
services, or an exit status other than 0
for the certbot
container, be sure to check the service logs with the docker-compose logs
command. For example, if you wanted to check the Certbot log, you would run:
- docker-compose logs certbot
You can now check that your credentials have been mounted to the webserver
container with docker-compose exec
:
- docker-compose exec webserver ls -la /etc/letsencrypt/live
Once your request succeeds, your output will reveal the following:
Output
total 16
drwx------ 3 root root 4096 Dec 23 16:48 .
drwxr-xr-x 9 root root 4096 Dec 23 16:48 ..
-rw-r--r-- 1 root root 740 Dec 23 16:48 README
drwxr-xr-x 2 root root 4096 Dec 23 16:48 your_domain
Now that you know your request will be successful, you can edit the certbot
service definition to remove the --staging
flag.
Open the docker-compose.yml
file:
- nano docker-compose.yml
Find the section of the file with the certbot
service definition, and replace the --staging
flag in the command
option with the --force-renewal
flag. This will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot
service definition should now read as follows:
...
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --webroot --webroot-path=/var/www/html --email sammy@your_domain --agree-tos --no-eff-email --force-renewal -d your_domain -d www.your_domain
...
When you’re done editing, save and exit the file. You can now run docker-compose up
to recreate the certbot
container and its relevant volumes. By including the --no-deps
option, you’re telling Compose that it can skip starting the webserver
service, since it is already running:
- docker-compose up --force-recreate --no-deps certbot
The following output indicates that your certificate request was successful:
Output
Recreating certbot ... done
Attaching to certbot
certbot | Account registered.
certbot | Renewing an existing certificate for your_domain and www.your_domain
certbot |
certbot | Successfully received certificate.
certbot | Certificate is saved at: /etc/letsencrypt/live/your_domain/fullchain.pem
certbot | Key is saved at: /etc/letsencrypt/live/your_domain/privkey.pem
certbot | This certificate expires on 2022-11-03.
certbot | These files will be updated when the certificate renews.
certbot | NEXT STEPS:
certbot | - The certificate will need to be renewed before it expires. Certbot can automatically renew the certificate in the background, but you may need to take steps to enable that functionality. See https://certbot.org/renewal-setup for instructions.
certbot | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot |
certbot | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
certbot | If you like Certbot, please consider supporting our work by:
certbot | * Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
certbot | * Donating to EFF: https://eff.org/donate-le
certbot | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
certbot exited with code 0
With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.
Enabling SSL in your Nginx configuration will involve adding an HTTP redirect to HTTPS and specifying your SSL certificate and key locations. It will also involve specifying the Diffie-Hellman group, which you will use for Perfect Forward Secrecy.
Since you are going to recreate the webserver
service to include these additions, you can stop it now:
- docker-compose stop webserver
Next, create a directory in your current project directory for the Diffie-Hellman key:
- mkdir dhparam
Generate your key with the openssl
command:
- sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048
It will take a few moments to generate the key.
To add the relevant Diffie-Hellman and SSL information to your Nginx configuration, first remove the Nginx configuration file you created earlier:
- rm nginx-conf/nginx.conf
Open another version of the file:
- nano nginx-conf/nginx.conf
Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace your_domain
with your own domain:
server {
listen 80;
listen [::]:80;
server_name your_domain www.your_domain;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name your_domain www.your_domain;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/your_domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your_domain/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @nodejs;
}
location @nodejs {
proxy_pass http://nodejs:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
#add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge
directory. It also includes a rewrite directive that directs HTTP requests to the root directory to HTTPS.
The HTTPS server block enables ssl
and http2
. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please read the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of options to ensure that you are using the most up-to-date SSL protocols and ciphers and that OCSP stapling is turned on. OCSP stapling allows you to offer a time-stamped response from your certificate authority during the initial TLS handshake, which can speed up the authentication process.
The block also specifies your SSL and Diffie-Hellman credentials and key locations.
Finally, you’ve moved the proxy pass information to this block, including a location block with a try_files directive, pointing requests to your aliased Node.js application container, and a location block for that alias, which includes security headers that will enable you to get A ratings on things like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-XSS-Protection, X-Content-Type-Options, Referrer-Policy, and Content-Security-Policy. The HTTP Strict-Transport-Security (HSTS) header is commented out; enable this only if you understand the implications and have assessed its “preload” functionality.
Once you have finished editing, save and close the file.
Before recreating the webserver
service, you need to add a few things to the service definition in your docker-compose.yml
file, including relevant port information for HTTPS and a Diffie-Hellman volume definition.
Open the file:
- nano docker-compose.yml
In the webserver
service definition, add the following port mapping and the dhparam
named volume:
...
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- nodejs
networks:
- app-network
Next, add the dhparam
volume to your volumes
definitions. Remember to replace the sammy
and node_project
directories to match yours:
...
volumes:
...
web-root:
...
dhparam:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/dhparam/
o: bind
Similarly to the web-root
volume, the dhparam
volume will mount the Diffie-Hellman key stored on the host to the webserver
container.
Save and close the file when you are finished editing.
Recreate the webserver
service:
- docker-compose up -d --force-recreate --no-deps webserver
Check your services with docker-compose ps
:
- docker-compose ps
The following output indicates that your nodejs
and webserver
services are running:
Output
   Name                  Command               State              Ports
----------------------------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
Finally, you can visit your domain to ensure that everything is working as expected. Navigate your browser to https://your_domain
, making sure to substitute your_domain
with your own domain name:
A lock icon should appear in your browser’s security indicator. If you would like to, you can navigate to the SSL Labs Server Test landing page or the Security Headers server test landing page. The configuration options included should earn your site an A rating on the SSL Labs Server Test. In order to get an A rating on the Security Headers server test, you would have to uncomment the Strict Transport Security (HSTS) header in your nginx-conf/nginx.conf
file:
…
location @nodejs {
proxy_pass http://nodejs:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
…
Again, enable this option only if you understand the implications and have assessed its “preload” functionality.
Let’s Encrypt certificates are valid for 90 days. You can set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron
scheduling utility. You can schedule a cron
job using a script that will renew your certificates and reload your Nginx configuration.
Open a script called ssl_renew.sh
in your project directory:
- nano ssl_renew.sh
Add the following code to the script to renew your certificates and reload your web server configuration:
#!/bin/bash
COMPOSE="/usr/local/bin/docker-compose --ansi never"
DOCKER="/usr/bin/docker"
cd /home/sammy/node_project/
$COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
This script first assigns the docker-compose binary to a variable called COMPOSE, and specifies the --ansi never option, which will run docker-compose commands without ANSI control characters. It then does the same with the docker binary. Finally, it changes to the ~/node_project directory and runs the following docker-compose commands:
docker-compose run: This will start a certbot container and override the command provided in the certbot service definition. Instead of the certonly subcommand, it uses the renew subcommand, which will renew certificates that are close to expiring. Also included is the --dry-run option to test the script.
docker-compose kill: This will send a SIGHUP signal to the webserver container to reload the Nginx configuration.
It then runs docker system prune to remove all unused containers and images.
Close the file when you are finished editing, then make it executable:
- chmod +x ssl_renew.sh
Next, open your root crontab
file to run the renewal script at a specified interval:
- sudo crontab -e
If this is your first time editing this file, you will be asked to choose an editor:
no crontab for root - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/ed
2. /bin/nano <---- easiest
3. /usr/bin/vim.basic
4. /usr/bin/vim.tiny
Choose 1-4 [2]:
...
At the end of the file, add the following line:
...
*/5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
This will set the job interval to every five minutes, so you can test whether your renewal request has worked as intended. You have also created a log file, cron.log
, to record relevant output from the job.
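For reference, the five scheduling fields in that line break down as follows (the command portion is the same one you just added):

```text
# ┌───────── minute (*/5 = every fifth minute)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-7; Sunday is 0 or 7)
# │ │ │ │ │
*/5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
```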
After five minutes, check cron.log
to confirm whether the renewal request has succeeded:
- tail -f /var/log/cron.log
After a few moments, the following output signals a successful renewal:
Output
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/your_domain/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Killing webserver ... done
Output
…
Congratulations, all simulated renewals succeeded:
/etc/letsencrypt/live/your_domain/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Killing webserver ...
Killing webserver ... done
Deleted Containers:
00cad94050985261e5b377de43e314b30ad0a6a724189753a9a23ec76488fd78
Total reclaimed space: 824.5kB
Exit out by entering CTRL + C
in your terminal.
You can now modify the crontab
file to set a daily interval. To run the script every day at noon, for example, you could modify the last line of the file like the following:
...
0 12 * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
You can also remove the --dry-run
option from your ssl_renew.sh
script:
#!/bin/bash
COMPOSE="/usr/local/bin/docker-compose --ansi never"
DOCKER="/usr/bin/docker"
cd /home/sammy/node_project/
$COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
Your cron
job will ensure that your Let’s Encrypt certificates don’t lapse by renewing them when they are eligible. You can also set up log rotation with the Logrotate utility to rotate and compress your log files.
You have used containers to set up and run a Node application with an Nginx reverse proxy. You have also secured SSL certificates for your application’s domain and set up a cron
job to renew these certificates when necessary.
If you are interested in learning more about Let’s Encrypt plugins, please review our articles on using the Nginx plugin or the standalone plugin.
You can also learn more about Docker Compose with the following resources:
The Compose documentation is also a great resource for learning more about multi-container applications.
Node.js runs JavaScript code in a single thread, which means that your code can only do one task at a time. However, Node.js itself is multithreaded and provides hidden threads through the libuv
library, which handles I/O operations like reading files from a disk or network requests. Through the use of hidden threads, Node.js provides asynchronous methods that allow your code to make I/O requests without blocking the main thread.
Although Node.js has hidden threads, you cannot use them to offload CPU-intensive tasks, such as complex calculations, image resizing, or video compression. Since JavaScript is single-threaded, when a CPU-intensive task runs, it blocks the main thread and no other code executes until the task completes. Without using other threads, the only way to speed up a CPU-bound task is to increase the processor speed.
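You can observe this blocking behavior with a short sketch (again illustrative rather than part of the tutorial app): a timer due in 10 ms cannot fire until a synchronous busy-loop releases the main thread:

```javascript
const start = Date.now();
let timerDelay; // how long the 10 ms timer actually waited

setTimeout(() => {
  timerDelay = Date.now() - start;
  console.log(`timer scheduled for 10 ms fired after ~${timerDelay} ms`);
}, 10);

// Simulate a CPU-bound task: hold the main thread for about 100 ms.
while (Date.now() - start < 100) {}
console.log('CPU-bound loop finished');
```

Even though the timer was due after 10 ms, it reports roughly 100 ms, because the event loop could not run its callback while the loop occupied the thread.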
However, in recent years, CPUs haven’t been getting faster. Instead, computers are shipping with extra cores, and it’s now more common for computers to have 8 or more cores. Despite this trend, your code will not take advantage of the extra cores on your computer to speed up CPU-bound tasks or avoid blocking the main thread, because JavaScript is single-threaded.
To remedy this, Node.js introduced the worker_threads
module, which allows you to create threads and execute multiple JavaScript tasks in parallel. Once a thread finishes a task, it sends a message to the main thread that contains the result of the operation so that it can be used with other parts of the code. The advantage of using worker threads is that CPU-bound tasks don’t block the main thread and you can divide and distribute a task to multiple workers to optimize it.
In this tutorial, you’ll create a Node.js app with a CPU-intensive task that blocks the main thread. Next, you will use the worker_threads
module to offload the CPU-intensive task to another thread to avoid blocking the main thread. Finally, you will divide the CPU-bound task and have four threads work on it in parallel to speed up the task.
To complete this tutorial, you will need:
A multi-core system with four or more cores. You can still follow the tutorial from Steps 1 through 6 on a dual-core system. However, Step 7 requires four cores to see the performance improvements.
A Node.js development environment. If you’re on Ubuntu 22.04, install the recent version of Node.js by following step 3 of How To Install Node.js on Ubuntu 22.04. If you’re on another operating system, see How to Install Node.js and Create a Local Development Environment.
A good understanding of the event loop, callbacks, and promises in JavaScript, which you can find in our tutorial, Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript.
Basic knowledge of how to use the Express web framework. Check out our guide, How To Get Started with Node.js and Express.
In this step, you’ll create the project directory, initialize npm
, and install all the necessary dependencies.
To begin, create and move into the project directory:
- mkdir multi-threading_demo
- cd multi-threading_demo
The mkdir
command creates a directory and the cd
command changes the working directory to the newly created one.
Following this, initialize the project directory with npm using the npm init
command:
- npm init -y
The -y
option accepts all the default options.
When the command runs, your output will look similar to this:
Wrote to /home/sammy/multi-threading_demo/package.json:
{
"name": "multi-threading_demo",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install express
, a Node.js web framework:
- npm install express
You will use Express to create a server application that has blocking and non-blocking endpoints.
Node.js ships with the worker_threads
module by default, so you don’t need to install it.
You’ve now installed the necessary packages. Next, you’ll learn more about processes and threads and how they execute on a computer.
Before you start writing CPU-bound tasks and offloading them to separate threads, you first need to understand what processes and threads are, and the differences between them. Most importantly, you’ll review how the processes and threads execute on a single or multi-core computer system.
A process is a running program in the operating system. It has its own memory and cannot see nor access the memory of other running programs. It also has an instruction pointer, which indicates the instruction currently being executed in a program. Only one task can be executed at a time.
To understand this, you will create a Node.js program with an infinite loop so that it doesn’t exit when run.
Using nano
, or your preferred text editor, create and open the process.js
file:
- nano process.js
In your process.js
file, enter the following code:
const process_name = process.argv.slice(2)[0];
let count = 0;
while (true) {
count++;
if (count == 2000 || count == 4000) {
console.log(`${process_name}: ${count}`);
}
}
In the first line, the process.argv
property returns an array containing the program command-line arguments. You then attach JavaScript’s slice()
method with an argument of 2
to make a shallow copy of the array from index 2 onwards. Doing so skips the first two arguments, which are the Node.js path and the program filename. Next, you use the bracket notation syntax to retrieve the first argument from the sliced array and store it in the process_name
variable.
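To see the argument handling in isolation, the same parsing can be exercised with a plain function (a hypothetical helper, not part of the tutorial's file; the argv values below are simulated):

```javascript
// process.argv always starts with the Node.js binary path and the script path,
// so slice(2) keeps only the user-supplied arguments.
function getProcessName(argv) {
  return argv.slice(2)[0];
}

// Simulated argv for: node process.js A
const name = getProcessName(["/usr/bin/node", "process.js", "A"]);
console.log(name); // A
```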
After that, you define a while
loop and pass it a true
condition to make the loop run forever. Within the loop, the count
variable is incremented by 1
during each iteration. Following this is an if
statement that checks whether count
is equal to 2000
or 4000
. If the condition evaluates to true, console.log()
method logs a message in the terminal.
Save and close your file using CTRL+X
, then press Y
to save the changes.
Run the program using the node
command:
- node process.js A &
A
is a command-line argument that is passed to the program and stored in the process_name
variable. The &
at the end allows the Node program to run in the background, which lets you enter more commands in the shell.
When you run the program, you will see output similar to the following:
Output[1] 7754
A: 2000
A: 4000
The number 7754
is a process ID that the operating system assigned to it. A: 2000
and A: 4000
are the program’s output.
When you run a program using the node
command, you create a process. The operating system allocates memory for the program, locates the program executable on your computer’s disk, and loads the program into memory. It then assigns it a process ID and begins executing the program. At that point, your program has now become a process.
When the process is running, its process ID is added to the process list of the operating system and can be seen with tools like htop
, top
, or ps
. The tools provide more details about the processes, as well as options to stop or prioritize them.
To get a quick summary of a Node process, press ENTER
in your terminal to get the prompt back. Next, run the ps
command to see the Node processes:
- ps |grep node
The ps
command lists all processes associated with the current user on the system. The pipe operator |
passes the ps
output to grep
, which filters the list so that only Node processes are shown.
Running the command will yield output similar to the following:
Output7754 pts/0 00:21:49 node
You can create countless processes out of a single program. For example, use the following command to create three more processes with different arguments and put them in the background:
- node process.js B & node process.js C & node process.js D &
In the command, you created three more instances of the process.js
program. The &
symbol puts each process in the background.
Upon running the command, the output will look similar to the following (although the order might differ):
Output[2] 7821
[3] 7822
[4] 7823
D: 2000
D: 4000
B: 2000
B: 4000
C: 2000
C: 4000
As you can see in the output, each process logged the process name into the terminal when the count reached 2000
and 4000
. Each process is not aware of any other process running: process D
isn’t aware of process C
, and vice versa. Anything that happens in either process will not affect other Node.js processes.
If you examine the output closely, you will see that the order of the output isn’t the same order you had when you created the three processes. When running the command, the process arguments were in the order B
, C
, and D
. But now, the order is D
, B
, and C
. The reason is that the OS has scheduling algorithms that decide which process to run on the CPU at a given time.
On a single core machine, the processes execute concurrently. That is, the operating system switches between the processes in regular intervals. For example, process D
executes for a limited time, then its state is saved somewhere and the OS schedules process B
to execute for a limited time, and so on. This happens back and forth until all the tasks have been finished. From the output, it might look like each process has run to completion, but in reality, the OS scheduler is constantly switching between them.
On a multi-core system—assuming you have four cores—the OS schedules each process to execute on each core at the same time. This is known as parallelism. However, if you create four more processes (bringing the total to eight), each core will execute two processes concurrently until they are finished.
Threads are like processes: they have their own instruction pointer and can execute one JavaScript task at a time. Unlike processes, threads do not have their own memory. Instead, they reside within a process’s memory. When you create a process, it can have multiple threads created with the worker_threads
module executing JavaScript code in parallel. Furthermore, threads can communicate with one another through message passing or sharing data in the process’s memory. This makes them lightweight in comparison to processes, since spawning a thread does not ask for more memory from the operating system.
When it comes to the execution of threads, they have similar behavior to that of processes. If you have multiple threads running on a single core system, the operating system will switch between them in regular intervals, giving each thread a chance to execute directly on the single CPU. On a multi-core system, the OS schedules the threads across all cores and executes the JavaScript code at the same time. If you end up creating more threads than there are cores available, each core will execute multiple threads concurrently.
With that, press ENTER
, then stop all the currently running Node processes with the kill
command:
- sudo kill -9 `pgrep node`
pgrep
returns the process IDs of all four Node processes to the kill
command. The -9
option instructs kill
to send a SIGKILL signal.
When you run the command, you will see output similar to the following:
Output[1] Killed node process.js A
[2] Killed node process.js B
[3] Killed node process.js C
[4] Killed node process.js D
Sometimes the output might be delayed and show up when you run another command later.
Now that you know the difference between a process and a thread, you’ll work with Node.js hidden threads in the next section.
Node.js does provide extra threads, which is why it’s considered to be multithreaded. In this section, you’ll examine hidden threads in Node.js, which help make I/O operations non-blocking.
As mentioned in the introduction, JavaScript is single-threaded and all the JavaScript code executes in a single thread. This includes your program source code and third-party libraries that you include in your program. When a program makes an I/O operation to read a file or a network request, this blocks the main thread.
However, Node.js uses the libuv
library, which provides four extra threads to a Node.js process. With these threads, the I/O operations are handled separately, and when they are finished, the event loop adds the callback associated with each I/O task to a callback queue. When the call stack in the main thread is clear, the callback is pushed onto the call stack and executed. To be clear, the callback associated with a given I/O task does not execute in parallel with the main thread; however, the task itself of reading a file or making a network request does happen in parallel with the help of the threads. Once the I/O task finishes, the callback runs in the main thread.
In addition to these four threads, the V8 engine also provides two threads for handling things like automatic garbage collection. This brings the total number of threads in a process to seven: one main thread, four Node.js threads, and two V8 threads.
To confirm that every Node.js process has seven threads, run the process.js
file again and put it in the background:
- node process.js A &
The terminal will log the process ID, as well as output from the program:
Output[1] 9933
A: 2000
A: 4000
Note the process ID somewhere and press ENTER
so that you can use the prompt again.
To see the threads, run the top
command and pass it the process ID displayed in the output:
- top -H -p 9933
-H
instructs top
to display threads in a process. The -p
flag instructs top
to monitor only the activity in the given process ID.
When you run the command, your output will look similar to the following:
Outputtop - 09:21:11 up 15:00, 1 user, load average: 0.99, 0.60, 0.26
Threads: 7 total, 1 running, 6 sleeping, 0 stopped, 0 zombie
%Cpu(s): 24.8 us, 0.3 sy, 0.0 ni, 75.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7951.2 total, 6756.1 free, 248.4 used, 946.7 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 7457.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9933 node-us+ 20 0 597936 51864 33956 R 99.9 0.6 4:19.64 node
9934 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.00 node
9935 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.84 node
9936 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.83 node
9937 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.93 node
9938 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.83 node
9939 node-us+ 20 0 597936 51864 33956 S 0.0 0.6 0:00.00 node
As you can see in the output, the Node.js process has seven threads in total: one main thread for executing JavaScript, four Node.js threads, and two V8 threads.
As discussed previously, the four Node.js threads are used for I/O operations to make them non-blocking. They work well for that task, and creating threads yourself for I/O operations may even worsen your application performance. The same cannot be said about CPU-bound tasks. A CPU-bound task does not make use of any extra threads available in the process and blocks the main thread.
Now press q
to exit top
and stop the Node process with the following command:
- kill -9 9933
Now that you know about the threads in a Node.js process, you will write a CPU-bound task in the next section and observe how it affects the main thread.
In this section, you will build an Express app that has a non-blocking route and a blocking route that runs a CPU-bound task.
First, open index.js
in your preferred editor:
- nano index.js
In your index.js
file, add the following code to create a basic server:
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
In the preceding code block, you create an HTTP server using Express. In the first line, you import the express
module. Next, you set the app
variable to hold an instance of Express. After that, you define the port
variable, which holds the port number the server should listen on.
Following this, you use app.get('/non-blocking')
to define the route that GET
requests will be sent to. Finally, you invoke the app.listen()
method to instruct the server to start listening on port 3000
.
Next, define another route, /blocking/
, which will contain a CPU-intensive task:
...
app.get("/blocking", async (req, res) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
res.status(200).send(`result is ${counter}`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
You define the /blocking
route using app.get("/blocking")
, which takes as its second argument an asynchronous callback (marked with the async
keyword) that runs a CPU-intensive task. Within the callback, you create a for
loop that iterates 20 billion times and during each iteration, it increments the counter
variable by 1
. This task runs on the CPU and will take several seconds to complete.
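The counting loop can be checked in isolation with a much smaller iteration count (a hypothetical helper; 20 billion iterations would take far too long for a quick check):

```javascript
// Same counting logic as the /blocking route, with a configurable size.
// The loop is synchronous, so nothing else can run on the main thread
// until it returns.
function busyCount(iterations) {
  let counter = 0;
  for (let i = 0; i < iterations; i++) {
    counter++;
  }
  return counter;
}

console.log(busyCount(50_000_000)); // 50000000
```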
At this point, your index.js
file will now look like this:
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
app.get("/blocking", async (req, res) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
res.status(200).send(`result is ${counter}`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and exit your file, then start the server with the following command:
- node index.js
When you run the command, you will see output similar to the following:
OutputApp listening on port 3000
This shows that the server is running and ready to serve.
Now, visit http://localhost:3000/non-blocking
in your preferred browser. You will see an instant response with the message This page is non-blocking
.
Note: If you are following the tutorial on a remote server, you can use port forwarding to test the app in the browser.
While the Express server is still running, open another terminal on your local computer and enter the following command:
- ssh -L 3000:localhost:3000 your-non-root-user@yourserver-ip
Upon connecting to the server, navigate to http://localhost:3000/non-blocking
on your local machine’s web browser. Keep the second terminal open throughout the remainder of this tutorial.
Next, open a new tab and visit http://localhost:3000/blocking
. As the page loads, quickly open two more tabs and visit http://localhost:3000/non-blocking
again. You will see that you won’t get an instant response, and the pages will keep trying to load. It is only after the /blocking
route finishes loading and returns a response result is 20000000000
that the rest of the routes will return a response.
The reason why all the /non-blocking
routes don’t work as the /blocking
route loads is because of the CPU-bound for
loop, which blocks the main thread. When the main thread is blocked, Node.js cannot serve any requests until the CPU-bound task has finished. So if your application has thousands of simultaneous GET
requests to the /non-blocking
route, a single visit to the /blocking
route is all it takes to make all the application routes non-responsive.
As you can see, blocking the main thread can harm the user’s experience with your app. To solve this issue, you will need to offload the CPU-bound task to another thread so that the main thread can continue handling other HTTP requests.
With that, stop the server by pressing CTRL+C
. You will start the server again in the next section after making more changes to the index.js
file. The server must be restarted manually because Node.js does not automatically reload when changes are made to the file.
Now that you understand the negative impact a CPU-intensive task can have on your application, you will now try to avoid blocking the main thread by using promises.
Often when developers learn about the blocking effect from CPU-bound tasks, they turn to promises to make the code non-blocking. This instinct stems from the knowledge of using non-blocking promise-based I/O methods, such as readFile()
and writeFile()
. But as you have learned, the I/O operations make use of Node.js hidden threads, which CPU-bound tasks do not. Nevertheless, in this section, you will wrap the CPU-bound task in a promise as an attempt to make it non-blocking. It won’t work, but it will help you to see the value of using worker threads, which you will do in the next section.
Open the index.js
file again in your editor:
- nano index.js
In your index.js
file, remove the highlighted code containing the CPU-intensive task:
...
app.get("/blocking", async (req, res) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
res.status(200).send(`result is ${counter}`);
});
...
Next, add the following highlighted code containing a function that returns a promise:
...
function calculateCount() {
return new Promise((resolve, reject) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
resolve(counter);
});
}
app.get("/blocking", async (req, res) => {
res.status(200).send(`result is ${counter}`);
});
The calculateCount()
function now contains the calculations you had in the /blocking
handler function. The function returns a promise, which is initialized with the new Promise
syntax. The promise takes a callback with resolve
and reject
parameters, which handle success or failure. When the for
loop finishes running, the promise resolves with the value in the counter
variable.
Next, call the calculateCount()
function in the /blocking/
handler function in the index.js
file:
app.get("/blocking", async (req, res) => {
const counter = await calculateCount();
res.status(200).send(`result is ${counter}`);
});
Here you call the calculateCount()
function with the await
keyword prefixed to wait for the promise to resolve. Once the promise resolves, the counter
variable is set to the resolved value.
Your complete code will now look like the following:
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
function calculateCount() {
return new Promise((resolve, reject) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
resolve(counter);
});
}
app.get("/blocking", async (req, res) => {
const counter = await calculateCount();
res.status(200).send(`result is ${counter}`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and exit your file, then start the server again:
- node index.js
In your web browser, visit http://localhost:3000/blocking
and as it loads, quickly reload the http://localhost:3000/non-blocking
tabs. As you will notice, the non-blocking
routes are still affected and they will all wait for the /blocking
route to finish loading. This demonstrates that promises don’t make JavaScript code execute in parallel, and so they cannot be used to make CPU-bound tasks non-blocking.
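The underlying reason is that the executor function passed to new Promise runs synchronously on the main thread. A small self-contained check makes this visible:

```javascript
const order = [];

const promise = new Promise((resolve) => {
  // This executor body runs immediately, on the main thread,
  // before new Promise() even returns.
  order.push("executor ran");
  resolve("done");
});

order.push("after new Promise");
console.log(order); // [ 'executor ran', 'after new Promise' ]
```

So wrapping the loop in a promise only changes when the result is delivered, not where the work runs.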
With that, stop the application server with CTRL+C
.
Now that you know promises do not provide any mechanism to make CPU-bound tasks non-blocking, you will use the Node.js worker_threads
module to offload a CPU-bound task into a separate thread.
worker_threads
Module
In this section, you will offload a CPU-intensive task to another thread using the worker_threads
module to avoid blocking the main thread. To do this, you will create a worker.js
file that will contain the CPU-intensive task. In the index.js
file, you will use the worker_threads
module to initialize the thread and start the task in the worker.js
file to run in parallel to the main thread. Once the task completes, the worker thread will send a message containing the result back to the main thread.
To begin, verify that you have 2 or more cores using the nproc
command:
- nproc
Output4
If it shows two or more cores, you can proceed with this step.
Next, create and open the worker.js
file in your text editor:
- nano worker.js
In your worker.js
file, add the following code to import the worker_threads
module and do the CPU-intensive task:
const { parentPort } = require("worker_threads");
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
The first line loads the worker_threads
module and extracts the parentPort
object. It provides methods you can use to send messages to the main thread. Next, you have the CPU-intensive task that is currently in the calculateCount()
function in the index.js
file. Later in this step, you will delete this function from index.js
.
Following this, add the highlighted code below:
const { parentPort } = require("worker_threads");
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
parentPort.postMessage(counter);
Here you invoke the postMessage()
method of the parentPort
object, which sends a message to the main thread containing the result of the CPU-bound task stored in the counter
variable.
Save and exit your file. Open index.js
in your text editor:
- nano index.js
Since you already have the CPU-bound task in worker.js
, remove the highlighted code from index.js
:
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
function calculateCount() {
return new Promise((resolve, reject) => {
let counter = 0;
for (let i = 0; i < 20_000_000_000; i++) {
counter++;
}
resolve(counter);
});
}
app.get("/blocking", async (req, res) => {
const counter = await calculateCount();
res.status(200).send(`result is ${counter}`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Next, in the app.get("/blocking")
callback, add the following code to initialize the thread:
const express = require("express");
const { Worker } = require("worker_threads");
...
app.get("/blocking", async (req, res) => {
const worker = new Worker("./worker.js");
worker.on("message", (data) => {
res.status(200).send(`result is ${data}`);
});
worker.on("error", (msg) => {
res.status(404).send(`An error occurred: ${msg}`);
});
});
...
First, you import the worker_threads
module and unpack the Worker
class. Within the app.get("/blocking")
callback, you create an instance of the Worker
using the new
keyword that is followed by a call to Worker
with the worker.js
file path as its argument. This creates a new thread and the code in the worker.js
file starts running in the thread on another core.
Following this, you attach an event to the worker
instance using the on("message")
method to listen to the message event. When the message is received containing the result from the worker.js
file, it is passed as a parameter to the method’s callback, which returns a response to the user containing the result of the CPU-bound task.
Next, you attach another event to the worker instance using the on("error")
method to listen to the error event. If an error occurs, the callback returns a 404
response containing the error message back to the user.
Your complete file will now look like the following:
const express = require("express");
const { Worker } = require("worker_threads");
const app = express();
const port = process.env.PORT || 3000;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
app.get("/blocking", async (req, res) => {
const worker = new Worker("./worker.js");
worker.on("message", (data) => {
res.status(200).send(`result is ${data}`);
});
worker.on("error", (msg) => {
res.status(404).send(`An error occurred: ${msg}`);
});
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and exit your file, then run the server:
- node index.js
Visit the http://localhost:3000/blocking
tab again in your web browser. Before it finishes loading, refresh all http://localhost:3000/non-blocking
tabs. You should now notice that they are loading instantly without waiting for the /blocking
route to finish loading. This is because the CPU-bound task is offloaded to another thread, and the main thread handles all the incoming requests.
Now, stop your server using CTRL+C
.
Now that you can make a CPU-intensive task non-blocking using a worker thread, you’ll use four worker threads to improve the performance of the CPU-intensive task.
In this section, you will divide the CPU-intensive task among four worker threads so that they can finish the task faster and shorten the load time of the /blocking
route.
To have more worker threads work on the same task, you will need to split the task. Since the task involves looping 20 billion times, you will divide 20 billion by the number of threads you want to use. In this case, it is 4
. Computing 20_000_000_000 / 4
will result in 5_000_000_000
. So each thread will loop from 0
to 5_000_000_000
and increment counter
by 1
. When each thread finishes, it will send a message to the main thread containing its result. Once the main thread has received messages from all four threads, you will combine the results and send a response to the user.
You can also use the same approach if you have a task that iterates over large arrays. For example, if you wanted to resize 800 images in a directory, you can create an array containing all the image file paths. Next, divide 800
by 4
(the thread count) and have each thread work on a range. Thread one will resize images from the array index 0
to 199
, thread two from index 200
to 399
, and so on.
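The index ranges for such a split can be computed with a small helper (hypothetical, for illustration only):

```javascript
// Split `length` items into contiguous [start, end] index ranges,
// one range per thread.
function splitRanges(length, threads) {
  const size = Math.ceil(length / threads);
  const ranges = [];
  for (let start = 0; start < length; start += size) {
    ranges.push([start, Math.min(start + size, length) - 1]);
  }
  return ranges;
}

console.log(splitRanges(800, 4));
// [ [ 0, 199 ], [ 200, 399 ], [ 400, 599 ], [ 600, 799 ] ]
```

Each range would then be passed to its worker through workerData, so every thread knows which slice of the array it owns.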
First, verify that you have four or more cores:
- nproc
Output4
Make a copy of the worker.js
file using the cp
command:
- cp worker.js four_workers.js
The current index.js
and worker.js
files will be left intact so that you can run them again to compare their performance with changes in this section later.
Next, open the four_workers.js
file in your text editor:
- nano four_workers.js
In your four_workers.js
file, add the highlighted code to import the workerData
object:
const { workerData, parentPort } = require("worker_threads");
let counter = 0;
for (let i = 0; i < 20_000_000_000 / workerData.thread_count; i++) {
counter++;
}
parentPort.postMessage(counter);
First, you extract the workerData
object, which will contain the data passed from the main thread when the thread is initialized (which you will do soon in the index.js
file). The object has a thread_count
property that contains the number of threads, which is 4
. Next in the for
loop, the value 20_000_000_000
is divided by 4
, resulting in 5_000_000_000
.
Save and close your file, then copy the index.js
file:
- cp index.js index_four_workers.js
Open the index_four_workers.js
file in your editor:
- nano index_four_workers.js
In your index_four_workers.js
file, add the highlighted code to create a thread instance:
...
const app = express();
const port = process.env.PORT || 3000;
const THREAD_COUNT = 4;
...
function createWorker() {
return new Promise(function (resolve, reject) {
const worker = new Worker("./four_workers.js", {
workerData: { thread_count: THREAD_COUNT },
});
});
}
app.get("/blocking", async (req, res) => {
...
})
...
First, you define the THREAD_COUNT
constant containing the number of threads you want to create. Later when you have more cores on your server, scaling will involve changing the value of the THREAD_COUNT
to the number of threads you want to use.
Next, the createWorker()
function creates and returns a promise. Within the promise callback, you initialize a new thread by passing the Worker
class the file path to the four_workers.js
file as the first argument. You then pass an object as the second argument. Next, you assign the object the workerData
property that has another object as its value. Finally, you assign the object the thread_count
property whose value is the number of threads in the THREAD_COUNT
constant. The workerData
object is the one you referenced in the four_workers.js
file earlier.
To make sure the promise resolves or throws an error, add the following highlighted lines:
...
function createWorker() {
return new Promise(function (resolve, reject) {
const worker = new Worker("./four_workers.js", {
workerData: { thread_count: THREAD_COUNT },
});
worker.on("message", (data) => {
resolve(data);
});
worker.on("error", (msg) => {
reject(`An error occurred: ${msg}`);
});
});
}
...
When the worker thread sends a message to the main thread, the promise resolves with the data returned. However, if an error occurs, the promise returns an error message.
Now that you have defined the function that initializes a new thread and returns the data from the thread, you’ll use the function in app.get("/blocking")
to spawn new threads.
But first, remove the following highlighted code, since you have already defined this functionality in the createWorker()
function:
...
app.get("/blocking", async (req, res) => {
const worker = new Worker("./worker.js");
worker.on("message", (data) => {
res.status(200).send(`result is ${data}`);
});
worker.on("error", (msg) => {
res.status(404).send(`An error occurred: ${msg}`);
});
});
...
With the code deleted, add the following code to initialize four worker threads:
...
app.get("/blocking", async (req, res) => {
const workerPromises = [];
for (let i = 0; i < THREAD_COUNT; i++) {
workerPromises.push(createWorker());
}
});
...
First, you create a workerPromises
variable, which contains an empty array. Next, you iterate as many times as the value in THREAD_COUNT
, which is 4
. During each iteration, you call the createWorker()
function to create a new thread. You then push the promise object that the function returns into the workerPromises
array using JavaScript’s push
method. When the loop finishes, the workerPromises
array will contain four promise objects, one returned from each call to the createWorker()
function.
Now, add the following highlighted code below to wait for the promises to resolve and return a response to the user:
app.get("/blocking", async (req, res) => {
const workerPromises = [];
for (let i = 0; i < THREAD_COUNT; i++) {
workerPromises.push(createWorker());
}
const thread_results = await Promise.all(workerPromises);
const total =
thread_results[0] +
thread_results[1] +
thread_results[2] +
thread_results[3];
res.status(200).send(`result is ${total}`);
});
Since the workerPromises
array contains the promises returned by calling createWorker()
, you prefix the Promise.all()
method with the await
syntax and call the all()
method with workerPromises
as its argument. The Promise.all()
method waits for all promises in the array to resolve. When that happens, the thread_results
variable contains the values that the promises resolved. Since the calculations were split among four workers, you add them all together by getting each value from the thread_results
using the bracket notation syntax. Once added, you return the total value to the page.
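The four-element sum could equivalently be written with reduce, which keeps working if you later change THREAD_COUNT. Here the resolved values are simulated with plain numbers standing in for the worker results:

```javascript
// Simulated results, standing in for the values resolved by the four workers.
const thread_results = [
  5_000_000_000,
  5_000_000_000,
  5_000_000_000,
  5_000_000_000,
];

// Sum every element, regardless of how many threads produced results.
const total = thread_results.reduce((sum, value) => sum + value, 0);
console.log(total); // 20000000000
```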
Your complete file should now look like this:
const express = require("express");
const { Worker } = require("worker_threads");
const app = express();
const port = process.env.PORT || 3000;
const THREAD_COUNT = 4;
app.get("/non-blocking/", (req, res) => {
res.status(200).send("This page is non-blocking");
});
function createWorker() {
return new Promise(function (resolve, reject) {
const worker = new Worker("./four_workers.js", {
workerData: { thread_count: THREAD_COUNT },
});
worker.on("message", (data) => {
resolve(data);
});
worker.on("error", (msg) => {
reject(`An error occurred: ${msg}`);
});
});
}
app.get("/blocking", async (req, res) => {
const workerPromises = [];
for (let i = 0; i < THREAD_COUNT; i++) {
workerPromises.push(createWorker());
}
const thread_results = await Promise.all(workerPromises);
const total =
thread_results[0] +
thread_results[1] +
thread_results[2] +
thread_results[3];
res.status(200).send(`result is ${total}`);
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and close your file. Before you run this file, first run index.js
to measure its response time:
- node index.js
Next, open a new terminal on your local computer and enter the following curl
command, which measures how long it takes to get a response from the /blocking
route:
- time curl --get http://localhost:3000/blocking
The time
command measures how long the curl
command runs. The curl
command sends an HTTP request to the given URL and the --get
option instructs curl
to make a GET
request.
When the command runs, your output will look similar to this:
Outputreal 0m28.882s
user 0m0.018s
sys 0m0.000s
The highlighted output shows that it takes about 28 seconds to get a response, which might vary on your computer.
Next, stop the server with CTRL+C
and run the index_four_workers.js
file:
- node index_four_workers.js
Visit the /blocking
route again in your second terminal:
- time curl --get http://localhost:3000/blocking
You will see output consistent with the following:
Outputreal 0m8.491s
user 0m0.011s
sys 0m0.005s
The output shows that it takes about 8 seconds, which means you cut the load time down by roughly 70%.
You successfully optimized the CPU-bound task using four worker threads. If you have a machine with more than four cores, update the THREAD_COUNT
to that number and you will cut the load time even further.
In this article, you built a Node app with a CPU-bound task that blocks the main thread. You then tried to make the task non-blocking using promises, which was unsuccessful. After that, you used the worker_threads
module to offload the CPU-bound task to another thread to make it non-blocking. Finally, you used the worker_threads
module to create four threads to speed up the CPU-intensive task.
As a next step, see the Node.js Worker threads documentation to learn more about options. In addition, you can check out the piscina
library, which allows you to create a worker pool for your CPU-intensive tasks. If you want to continue learning Node.js, see the tutorial series, How To Code in Node.js.
require()
call. Before reading this post, please go through the post “Node JS Export and Import Modules” to learn how the require() call is used.
In this post, we will discuss the Node JS Platform “fs” module. FS stands for File System. This module is also known as the IO, FileSystem, or Stream module.
The Node FS module provides an API to interact with the file system and perform IO operations such as creating, reading, updating, and deleting files. Like other modules such as “http”, the Node JS “fs” module ships with the basic Node JS platform, so we don’t need to do anything to set it up.
We just need to import the fs module into our code and start writing IO operations. To import the node fs module:
var fs = require("fs");
This require() call loads the Node JS “fs” module into the module cache and returns a Node FS module object. Once that’s done, we can perform any IO operation using the fs object. Let us assume that our ${Eclipse_Workspace}
refers to D:\RamWorkspaces\NodeWorkSpace
. From now on, I will use this variable to refer to my Eclipse workspace. As Java, .NET, or C/C++ developers, we have already learned and written some IO programs. IO streams come in two types:
Now we will discuss about how to create a new file using Node JS FS API.
Create a Node JS Project in Eclipse IDE.
Copy package.json
file from previous examples and update the required things.
{
"name": "filesystem",
"version": "1.0.0",
"description": "File System Example",
"main": "filesystem",
"author": "JournalDEV",
"engines":{
"node":"*"
}
}
Create a JavaScript file with the following content; fs-create-file.js
/**
* Node FS Example
* Node JS Create File
*/
var fs = require("fs");
var createStream = fs.createWriteStream("JournalDEV.txt");
createStream.end();
Code Description: var fs = require("fs")
The require() call loads the specified Node FS module into the cache and assigns it to an object named fs. The fs.createWriteStream(filename)
call creates a write stream and a file with the given filename. The createStream.end()
call ends, or closes, the opened stream.
Before executing fs-create-file.js
, first observe the filesystem project contents and you will notice that the “JournalDEV.txt” file is not there.
Open command prompt at ${Eclipse_Workspace}/filesystem
and run the node command to execute the fs-create-file.js
file as shown in the below image. Now check your project directory contents; you will notice an empty file named “JournalDEV.txt”.
We will use the Node FS API to create a new file and write some data into it. This is a continuation of our previous example.
Remove previously created “JournalDEV.txt” from ${Eclipse_Workspace}/filesystem folder
Create a JavaScript file with the following content: fs-write-file.js
/**
* Node FS Example
* Node JS Write to File
*/
var fs = require("fs");
var writeStream = fs.createWriteStream("JournalDEV.txt");
writeStream.write("Hi, JournalDEV Users. ");
writeStream.write("Thank You.");
writeStream.end();
writeStream.write(sometext)
call is used to write some text to the file.
Open command prompt at ${Eclipse_Workspace}/filesystem
and run the node command to execute the fs-write-file.js
file as shown below.
Go to the ${Eclipse_Workspace}/filesystem folder and open “JournalDEV.txt” to verify its content. We have now created a new file and written some data into it.
We will use the Node FS API to open an existing file, read its content, and write that content to the console. This is a continuation of our previous examples. Here we are going to use a named JavaScript function; go through this example to understand it.
Create a JavaScript file with the following content: fs-read-file1.js
/**
* Node FS Read File
* Node JS Read File
*/
var fs = require("fs");
function readData(err, data) {
console.log(data);
}
fs.readFile('JournalDEV.txt', 'utf8', readData);
Code Description: readData()
is a JavaScript function that takes two parameters: an error object (err) and the file content (data).
It takes the data parameter and prints that data to the console. fs.readFile()
is a Node JS FS API call that takes three parameters: the filename, the encoding, and a callback function.
Open a command prompt at ${Eclipse_Workspace}/filesystem and run the node command to execute the fs-read-file1.js file. We can observe that our code successfully opens the file, reads its content, and writes that content to the console.
Now we are going to use a JavaScript anonymous function to read data from a file. It is similar to the previous fs-read-file1.js example, but with an anonymous function. If you are not familiar with JavaScript anonymous functions, please go through a JavaScript tutorial first to get some idea. Create a JavaScript file with the following content: fs-read-file2.js
/**
* Node FS File System Module
* Node.js read file example
*/
var fs = require("fs");
fs.readFile('JournalDEV.txt', 'utf8', function(err, data) {
console.log(data);
});
Code Description: Here readFile()
uses a JavaScript anonymous function to read data from a file and write that file’s content to the console. When we run this file, we will get the same output as the fs-read-file1.js example. Now remove the “utf8” data format to see binary output.
/**
* Node FileSystem Module
* Node JS Read File Binary Data
*/
var fs = require("fs");
fs.readFile('JournalDEV.txt', function(err, data) {
console.log(data);
});
Execute the above file and observe the output as shown in the below image. Bonus TIP: To learn the Node JS FS API in depth, use Enide 2014 Studio to explore all available functions and their usage as shown below.
[anyFSAPIObject] + Press . (dot) + After dot Press (CTRL + Space Bar)
Now we are familiar with the Node FS module. We will use this knowledge in upcoming posts, especially the HTTP Module post. Reference: Official Documentation
JSON Server is a Node module that you can use to create a demo REST JSON web service in less than a minute. All you need is a JSON file for sample data.
You should have NPM installed on your machine. If not, refer to this post to install NPM. Below is the one-liner command to install json-server
with the output on my machine.
$ npm install -g json-server
npm WARN deprecated graceful-fs@3.0.8: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
/usr/local/bin/json-server -> /usr/local/lib/node_modules/json-server/bin/index.js
- bytes@2.3.0 node_modules/json-server/node_modules/raw-body/node_modules/bytes
/usr/local/lib
└─┬ json-server@0.8.10
├─┬ body-parser@1.15.1
│ └── bytes@2.3.0
├─┬ compression@1.6.1
│ └── bytes@2.2.0
├─┬ lowdb@0.10.3
│ └─┬ steno@0.4.4
│ └── graceful-fs@4.1.4
├─┬ update-notifier@0.5.0
│ └─┬ configstore@1.4.0
│ ├── graceful-fs@4.1.4
│ └─┬ write-file-atomic@1.1.4
│ └── graceful-fs@4.1.4
└─┬ yargs@4.7.0
├─┬ pkg-conf@1.1.2
│ └─┬ load-json-file@1.1.0
│ └── graceful-fs@4.1.4
└─┬ read-pkg-up@1.0.1
└─┬ read-pkg@1.1.0
└─┬ path-type@1.1.0
└── graceful-fs@4.1.4
$
$ json-server -v
0.8.10
$ json-server -help
/usr/local/bin/json-server [options] <source>
Options:
--config, -c Path to config file [default: "json-server.json"]
--port, -p Set port [default: 3000]
--host, -H Set host [default: "0.0.0.0"]
--watch, -w Watch file(s) [boolean]
--routes, -r Path to routes file
--static, -s Set static files directory
--read-only, --ro Allow only GET requests [boolean]
--no-cors, --nc Disable Cross-Origin Resource Sharing [boolean]
--no-gzip, --ng Disable GZIP Content-Encoding [boolean]
--snapshots, -S Set snapshots directory [default: "."]
--delay, -d Add delay to responses (ms)
--id, -i Set database id property (e.g. _id) [default: "id"]
--quiet, -q Suppress log messages from output [boolean]
$
Now it’s time to start our json-server. Below is a sample file with my employees json data.
{
"employees": [
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
},
{
"name": "David",
"salary": "5000",
"id": 2
}
]
}
The important point here is the name of the array, i.e. employees. JSON server will create the REST APIs based on it. Let’s start json-server with the above file.
$ json-server --watch db.json
\{^_^}/ hi!
Loading db.json
Done
Resources
http://localhost:3000/employees
Home
http://localhost:3000
Type s + enter at any time to create a snapshot of the database
Watching...
Don’t close this terminal, otherwise it will kill the json-server. Below are the sample CRUD requests and responses.
$ curl -X GET -H "Content-Type: application/json" "http://localhost:3000/employees"
[
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
},
{
"name": "David",
"salary": "5000",
"id": 2
}
]
$
$ curl -X GET -H "Content-Type: application/json" "http://localhost:3000/employees/1"
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
}
$
$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Lisa","salary": "2000"}' "http://localhost:3000/employees"
{
"name": "Lisa",
"salary": 2000,
"id": 3
}
$
$ curl -X PUT -H "Content-Type: application/json" -d '{"name": "Lisa", "salary": "8000"}' "http://localhost:3000/employees/3"
{
"name": "Lisa",
"salary": 8000,
"id": 3
}
$
$ curl -X DELETE -H "Content-Type: application/json" "http://localhost:3000/employees/2"
{}
$ curl -X GET -H "Content-Type: application/json" "http://localhost:3000/employees"
[
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
},
{
"name": "Lisa",
"salary": 8000,
"id": 3
}
]
$
As you can see, with a simple JSON file, json-server creates demo APIs for us to use. Note that all PUT, POST, and DELETE requests are saved into the db.json
file. Notice that the URIs for GET and DELETE are the same, as are those for POST and PUT. We can also create our own custom URIs with a simple mapping file.
Create a file with custom routes for our json-server to use. routes.json
{
"/employees/list": "/employees",
"/employees/get/:id": "/employees/:id",
"/employees/create": "/employees",
"/employees/update/:id": "/employees/:id",
"/employees/delete/:id": "/employees/:id"
}
We can also change the json-server port to simulate a third-party API: just change the base URL when the real service is ready and you will be good to go. Now start json-server again as shown below.
$ json-server --port 7000 --routes routes.json --watch db.json
(node:60899) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version.
\{^_^}/ hi!
Loading db.json
Loading routes.json
Done
Resources
http://localhost:7000/employees
Other routes
/employees/list -> /employees
/employees/get/:id -> /employees/:id
/employees/create -> /employees
/employees/update/:id -> /employees/:id
/employees/delete/:id -> /employees/:id
Home
http://localhost:7000
Type s + enter at any time to create a snapshot of the database
Watching...
The output shows the custom routes we defined.
Below is the example of some of the commands and their output with custom routes.
$ curl -X GET -H "Content-Type: application/json" "http://localhost:7000/employees/list"
[
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
},
{
"name": "Lisa",
"salary": 8000,
"id": 3
}
]
$ curl -X GET -H "Content-Type: application/json" "http://localhost:7000/employees/get/1"
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
}
$ curl -X POST -H "Content-Type: application/json" -d '{"name": "Lisa","salary": "2000"}' "http://localhost:7000/employees/create"
{
"name": "Lisa",
"salary": 2000,
"id": 4
}
$ curl -X PUT -H "Content-Type: application/json" -d '{"name": "Lisa", "salary": "8000"}' "http://localhost:7000/employees/update/4"
{
"name": "Lisa",
"salary": 8000,
"id": 4
}
$ curl -X DELETE -H "Content-Type: application/json" "http://localhost:7000/employees/delete/4"
{}
$ curl -X GET -H "Content-Type: application/json" "http://localhost:7000/employees/list"
[
{
"id": 1,
"name": "Pankaj",
"salary": "10000"
},
{
"name": "Lisa",
"salary": 8000,
"id": 3
}
]
$
JSON server provides some other useful options such as sorting, searching, and pagination. That’s all for json-server; it’s my go-to tool whenever I need to create demo REST JSON APIs. Reference: json-server GitHub
We will use this knowledge especially in the next post, but also in the examples in all of my coming posts.
First and foremost, we need to understand why we need to export a Node JS module. Node JS provides almost all of the commonly required modules (we can check for updates on its official website, https://www.npmjs.com/, also known as the Node JS Module Repository). But in some real-time applications, we may have functionality that is used in many places in the application but is not available in the Node JS Module Repository. In this scenario, to get the benefit of reusability, we need to create our own new Node JS module. Just creating a new module is not enough to use it in other modules of the system; we need to export the module so that other modules can reuse it. If you are a UI or Java developer, this concept is familiar to you: if we have a common or reusable component in a Java-based application, we develop it as a separate project, create a Jar file, and add it to the classpath of the projects that require it. Don’t worry too much about how to create a Node JS module at this point; we will discuss creating our own new Node JS modules in a future post. The Node JS platform provides a technique to export the following three things so that other modules can reuse them without redefining them:
To export all three, we use the same technique: Node JS provides an object named “exports” for this. Syntax: We will now go through them one by one with some examples. Here we discuss them theoretically only; we will take one real-time scenario and implement a Node JS application in my next post. How to export a JavaScript variable: We use the Node JS “exports” object to export a module’s variable so that other modules can reuse it. In real-time code we don’t usually export just a variable; however, for practice, and to make it clear to beginners, we discuss this separately. Syntax: Example: Let us assume that we have created a simple Node JS project with one JavaScript file, pi.js, with the following content.
var PI = 3.1416
exports.PI = PI;
Here we have exported the PI variable as exports.PI with the name PI. That means other Node JS projects can use this variable very easily just by using the name PI. How to export a JavaScript function: We use the same Node JS “exports” object to export a module’s JavaScript function so that other modules can reuse it. In real-time code we may do this, but it is not always recommended. Syntax: Example: Let us assume that we have created a simple Node JS project with one JavaScript file, arthmetic.js, with the following content.
function add(a,b){
return a + b;
}
function sub(a,b){
return a - b;
}
function mul(a,b){
return a * b;
}
function div(a,b){
return a / b;
}
exports.add = add
exports.sub = sub
exports.mul = mul
exports.div = div
Here we have exported all 4 JavaScript functions separately. That means other Node JS projects can reuse them directly. This is a simple arithmetic application; however, real-time Node JS applications contain plenty of JavaScript files, and each file may contain plenty of functions. In that scenario, it is a very tedious process to export each and every function separately. To resolve this issue, we use the technique of exporting a whole Node JS module, as described below. How to export a Node JS module: We use the same Node JS “exports” object to export a complete Node JS module so that other modules can reuse it. In real-time code, this is the recommended approach. Syntax: Example: Let us assume that we have created a simple Node JS project with one JavaScript file, arthmetic.js, with the following content.
var PI = 3.1416;
function add(a,b){
return a + b;
}
function sub(a,b){
return a - b;
}
function mul(a,b){
return a * b;
}
function div(a,b){
return a / b;
}
// Export all functions and PI with a single statement
module.exports = { PI: PI, add: add, sub: sub, mul: mul, div: div };
Here we have exported all 4 JavaScript functions and the PI variable with just one export statement. That means other Node JS projects can reuse all the functions and PI very easily.
If we observe a Node JS application or module, it may depend on other existing Node JS modules or on our own modules. The major advantage of modularity is reusability: we don’t need to redevelop existing functionality; we can import a module and reuse its functionality very easily. How to import a Node JS module: We use the same technique to import our own modules or existing Node JS modules. The Node JS platform provides a function called require() to import one module into another. Syntax: Here module-name is the name of the required Node JS module, and some-name is the reference name for that module. This require() call imports the specified module and caches it in the application so that we don’t need to import it again and again. Example:
To import our own Node JS module (note the relative path for a local file):
var arthmetic = require("./arthmetic");
To import an existing Node JS module, such as the Node JS “express” module:
var express = require("express");
Import Node JS “mongoose” module;
var mongoose = require("mongoose");
This require() call is similar to the “import” statement in Java; we use the import statement to bring a package, class, or interface into another class or interface. Now we have some knowledge about how to export and import a Node JS module. We will use this knowledge to create our own Node JS modules in the next post.
PATH=C:\Users\[username]\AppData\Roaming\npm;D:\NodeJS.V.0.12.0\;%PATH%
node -v
For Mac OS X, download the pkg installer and run it. You will get the below screens in order. As you can see from the above image, /usr/local/bin should be in the PATH variable. It is usually there by default, but you can check it using the below command.
pankaj:~ pankaj$ echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Library/Java/JavaVirtualMachines/jdk1.7.0_60.jdk/Contents/Home/bin:/usr/local/apache-maven-3.0.5/bin:
pankaj:~ pankaj$
Below are some sample commands to check the version and start Node CLI and exit it.
pankaj:~ pankaj$ node -v
v0.12.1
pankaj:~ pankaj$ node
> process.exit();
pankaj:~ pankaj$ node
>
(^C again to quit)
>
pankaj:~ pankaj$
Before starting with some simple Node JS examples, we will discuss the major components of the Node JS platform in the next post.
Before starting some Node JS programming examples, it’s important to have an idea about the Node JS architecture. In this post, we will discuss how Node JS works under the hood, what type of processing model it follows, and how Node JS handles concurrent requests with a single-threaded model.
As we have already discussed, Node JS applications use a “Single Threaded Event Loop Model” architecture to handle multiple concurrent clients. There are many web application technologies like JSP, Spring MVC, ASP.NET, HTML, Ajax, jQuery, etc., but all of them follow a “Multi-Threaded Request-Response” architecture to handle multiple concurrent clients. We are already familiar with that architecture because most web application frameworks use it. So why has the Node JS platform chosen a different architecture for developing web applications, and what are the major differences between the multi-threaded and single-threaded event loop architectures? Any web developer can learn Node JS and develop applications very easily. However, without understanding Node JS internals, we cannot design and develop Node JS applications well. So before starting to develop Node JS applications, we will first learn the Node JS platform internals.
The Node JS platform uses a “Single Threaded Event Loop” architecture to handle multiple concurrent clients. So how does it handle concurrent client requests without using multiple threads, and what is the Event Loop model? We will discuss these concepts one by one. Before discussing the “Single Threaded Event Loop” architecture, let’s first go through the famous “Multi-Threaded Request-Response” architecture.
Any web application developed without Node JS typically follows the “Multi-Threaded Request-Response” model, which we can simply call the Request/Response model. The client sends a request to the server, the server does some processing based on the request, prepares a response, and sends it back to the client. This model uses the HTTP protocol. Since HTTP is a stateless protocol, the Request/Response model is also stateless, so we can call it the Request/Response Stateless model. However, this model uses multiple threads to handle concurrent client requests. Before discussing this model’s internals, first go through the diagram below. Request/Response Model Processing Steps:
The server waits in an infinite loop, performing all the sub-steps above for all n clients. That means this model creates one thread per client request. If many client requests require blocking IO operations, then almost all threads become busy preparing their responses, and the remaining client requests must wait for a long time. Diagram Description:
Here “n” clients send requests to the web server. Let us assume they are accessing our web application concurrently.
Let us assume our clients are Client-1, Client-2… and Client-n.
The web server internally maintains a limited thread pool. Let us assume there are “m” threads in the pool.
The web server receives those requests one by one.
The web server picks up Client-1’s Request-1, takes thread T-1 from the thread pool, and assigns the request to it.
The web server picks up Client-2’s Request-2, takes thread T-2 from the thread pool, and assigns the request to it.
The web server picks up Client-n’s Request-n, takes thread T-n from the thread pool, and assigns the request to it.
Once threads become free in the thread pool and available for the next tasks, the server picks those threads up and assigns them to the remaining client requests.
Each thread uses many resources such as memory, so before those threads move from the busy state to the waiting state, they should release all acquired resources.
Drawbacks of Request/Response Stateless Model:
The Node JS platform does not follow the Request/Response Multi-Threaded Stateless model; it follows a Single Threaded Event Loop model. The Node JS processing model is mainly based on the JavaScript event-based model with the JavaScript callback mechanism, so you should have good knowledge of how JavaScript events and callbacks work. If you don’t, please go through those posts or tutorials first and get some idea before moving to the next step in this post. Because Node JS follows this architecture, it can handle many concurrent client requests very easily. Before discussing this model’s internals, first go through the diagram below; I tried to design it to explain each and every point of Node JS internals. The main heart of the Node JS processing model is the “Event Loop”. If we understand this, then it is very easy to understand the Node JS internals. Single Threaded Event Loop Model Processing Steps:
Here a client request is a call to one or more JavaScript functions. JavaScript functions may call other functions or may use their callback nature. So each client request looks like the following. For example:
function1(function2,callback1);
function2(function3,callback2);
function3(input-params);
NOTE: As I’m a Java developer, I will try to explain how the event loop works in Java terminology. It is not pure Java code, but I guess everyone can understand it. If you face any issues understanding it, please drop me a comment.
public class EventLoop {
while(true){
if(Event Queue receives a JavaScript Function Call){
ClientRequest request = EventQueue.getClientRequest();
If(request requires BlockingIO or takes more computation time)
Assign request to Thread T1
Else
Process and Prepare response
}
}
}
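The behavior the pseudocode describes can be observed directly in a few lines of JavaScript: work handed to the event queue runs only after the current synchronous code finishes, so the single thread is never blocked waiting.

```javascript
// Sketch: callbacks queued via the event loop run after synchronous code.
console.log("request received");

setTimeout(function () {
  // Simulates a blocking-IO result arriving back from the event queue.
  console.log("IO finished (callback)");
}, 0);

console.log("response prepared without waiting"); // printed before the callback
```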
That’s all for Node JS Architecture and Node JS single threaded event loop.
Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.
In this guide, we will show you three different ways of getting Node.js installed on a Rocky Linux 8 server:
- Using dnf to install the nodejs package from Rocky’s default software repository
- Using dnf with the NodeSource software repository to install specific versions of the nodejs package
- Installing nvm, the Node Version Manager, and using it to install and manage multiple versions of Node.js
For many users, using dnf with the default package sources will be sufficient. If you need specific newer (or legacy) versions of Node, you should use the NodeSource repository. If you are actively developing Node applications and need to switch between node versions frequently, choose the nvm method.
This guide assumes that you are using Rocky Linux 8. Before you begin, you should have a non-root user account with sudo
privileges set up on your system. You can learn how to do this by following the Rocky Linux 8 initial server setup tutorial.
Rocky Linux 8 contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 10.24.0. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.
To get this version, you can use the dnf
package manager:
- sudo dnf install nodejs -y
Check that the install was successful by querying node
for its version number:
- node -v
Output
v10.24.0
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. The Node.js package from Rocky’s default repositories also comes with npm
, the Node.js package manager. This will allow you to install modules and packages to use with Node.js.
At this point you have successfully installed Node.js and npm
using dnf
and the default Rocky software repositories. The next section will show you how to use an alternate repository to install different versions of Node.js.
To install a different version of Node.js, you can use the NodeSource repository. NodeSource is a third party repository that has more versions of Node.js available than the official Rocky repositories. Node.js v14, v16, and v17 are available as of the time of writing.
First, you’ll need to configure the repository locally, in order to get access to its packages. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 18.x
with your preferred version string (if different).
- cd ~
- curl -sL https://rpm.nodesource.com/setup_18.x -o nodesource_setup.sh
Refer to the NodeSource documentation for more information on the available versions.
You can inspect the contents of the downloaded script with vi
(or your preferred text editor):
- vi nodesource_setup.sh
Running third party shell scripts is not always considered a best practice, but in this case, NodeSource implements their own logic in order to ensure the correct commands are being passed to your package manager based on distro and version requirements. If you are satisfied that the script is safe to run, exit your editor, then run the script with sudo
:
- sudo bash nodesource_setup.sh
Output
…
## Your system appears to already have Node.js installed from an alternative source.
Run `sudo yum remove -y nodejs npm` to remove these first.
## Run `sudo yum install -y nodejs` to install Node.js 18.x and npm.
## You may run dnf if yum is not available:
sudo dnf install -y nodejs
## You may also need development tools to build native addons:
sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install yarn
The repository will be added to your configuration and your local package cache will be updated automatically. You can now install the Node.js package in the same way you did in the previous section. It may be a good idea to fully remove your older Node.js packages before installing the new version, by using sudo dnf remove nodejs npm
. This will not affect your configurations at all, only the installed versions. Third party repositories don’t always package their software in a way that works as a direct upgrade over stock packages, and if you have trouble, you can always try to revert to a clean slate.
- sudo dnf remove nodejs npm -y
- sudo dnf install nodejs -y
Verify that you’ve installed the new version by running node
with the -v
version flag:
- node -v
Output
v18.6.0
The NodeSource nodejs
package contains both the node
binary and npm
, so you don’t need to install npm
separately.
At this point you have successfully installed Node.js and npm
using dnf
and the NodeSource repository. The next section will show how to use the Node Version Manager to install and manage multiple versions of Node.js.
Another way of installing Node.js that is particularly flexible is to use nvm, the Node Version Manager. This piece of software allows you to install and maintain many different independent versions of Node.js, and their associated Node packages, at the same time.
To install NVM on your Rocky Linux 8 machine, visit the project’s GitHub page. Copy the curl
command from the README file that displays on the main page. This will get you the most recent version of the installation script.
Before piping the command through to bash
, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash
segment at the end of the curl
command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh
Take a look and make sure you are comfortable with the changes it is making. When you are satisfied, run the command again with | bash
appended at the end. The URL you use will change depending on the latest version of nvm, but as of right now, the script can be downloaded and executed by typing:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
This will install the nvm
script to your user account. To use it, you must first source your .bashrc
file:
- source ~/.bashrc
Now, you can ask NVM which versions of Node are available:
- nvm list-remote
Output
. . .
v16.11.1
v16.12.0
v16.13.0 (LTS: Gallium)
v16.13.1 (LTS: Gallium)
v16.13.2 (LTS: Gallium)
v16.14.0 (LTS: Gallium)
v16.14.1 (LTS: Gallium)
v16.14.2 (LTS: Gallium)
v16.15.0 (LTS: Gallium)
v16.15.1 (LTS: Gallium)
v16.16.0 (Latest LTS: Gallium)
v17.0.0
v17.0.1
v17.1.0
v17.2.0
…
It’s a very long list! You can install a version of Node by typing any of the release versions you see. For instance, to get version v16.16.0 (an LTS release), you can type:
- nvm install v16.16.0
You can see the different versions you have installed by typing:
- nvm list
Output
-> v16.16.0
system
default -> v16.16.0
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v16.16.0) (default)
stable -> 16.16 (-> v16.16.0) (default)
lts/* -> lts/gallium (-> v16.16.0)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.12 (-> N/A)
lts/fermium -> v14.20.0 (-> N/A)
lts/gallium -> v16.16.0
This shows the currently active version on the first line (-> v16.16.0
), followed by some named aliases and the versions that those aliases point to.
Note: if you also have a version of Node.js installed through dnf
, you may see a system
entry here. You can always activate the system-installed version of Node using nvm use system
.
You can install a release based on these aliases as well. For instance, to install fermium
, run the following:
- nvm install lts/fermium
Output
Downloading and installing node v14.19.0...
Downloading https://nodejs.org/dist/v14.19.0/node-v14.19.0-linux-x64.tar.xz...
################################################################################# 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v14.19.0 (npm v6.14.16)
You can verify that the install was successful using the same technique from the other sections, by typing:
- node -v
Output
v14.19.0
The correct version of Node is installed on our machine as we expected. A compatible version of npm
is also available.
There are quite a few ways to get up and running with Node.js on your Rocky Linux server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Rocky’s repositories is the easiest method, using nvm
or the NodeSource repository offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
Sequelize is a Node.js-based Object Relational Mapper that makes it easy to work with MySQL, MariaDB, SQLite, PostgreSQL databases, and more. An Object Relational Mapper performs functions like handling database records by representing the data as objects. Sequelize has a powerful migration mechanism that can transform existing database schemas into new versions. Overall, Sequelize provides excellent support for database synchronization, eager loading, associations, transactions, and database migrations while reducing development time and preventing SQL injections.
In this tutorial, you will install and configure Sequelize with MySQL on your local development environment. Next, you will use Sequelize to create databases and models, as well as perform the insert
, select
, and delete
operations. Then, you will create Sequelize associations for one-to-one, one-to-many, and many-to-many relationships. Finally, you will create Sequelize raw queries for array and object replacements.
To complete this tutorial, you will need Node.js and npm installed on your local machine, as well as a MySQL server installed and configured with a user account.
This tutorial was tested on Node.js version 14.17.6 and npm
version 6.14.15 on macOS Catalina.
In this step, you will install Sequelize and create the connection to your MySQL database. To do that, first you will create a Node.js application. Then, you will install Sequelize, configure the MySQL database, and develop a simple application.
Begin by creating a project folder. In this example, you can use hello-world
. Once the folder is created, navigate to the folder using the terminal:
- mkdir hello-world
- cd hello-world
Then, create a sample Node.js application using the following command:
- npm init
Next, you will be prompted to answer some set-up questions. Use the following output for your configuration. Press ENTER
to use the displayed default value and be sure to set the main entry point as server.js
. This creates a project structure that is easy to maintain.
The output will look as follows, which will populate the package.json
file:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC"
}
Next, create an empty server.js
file inside the project folder:
- touch server.js
After following the previous steps, your final folder structure will look like this:
hello-world/
├─ package.json
├─ server.js
Now you can install Sequelize with the following command:
- npm i sequelize@6.11.0
Note: This command installs version 6.11.0. If you need to install the latest version, run npm i sequelize
.
After these updates, the package.json
file now looks like this:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node server.js"
},
"author": "",
"license": "ISC",
"dependencies": {
"sequelize": "^6.11.0"
}
}
In the dependencies
section, you will now see a Sequelize dependency.
You have set up the project and installed Sequelize. Next, you’ll create a sample database to connect to.
As part of the prerequisites, you installed and configured MySQL, which included creating a user. Now you will create an empty database.
To do that, first, you need to log in to your MySQL instance. If your MySQL server is running remotely, you can use your preferred client tool. If you are using a locally running MySQL instance, you can use the following command, replacing your_username with your MySQL username:
- mysql -u your_username -p
The -u flag specifies the username, and the -p option is passed when the account is secured with a password.
The MySQL server will ask for your database password. Type your password and press ENTER
.
Once you’re logged in, create a database called hello_world_db
using the following command:
- CREATE DATABASE hello_world_db;
To verify whether you have created the database successfully, you can use this command:
- SHOW DATABASES;
Your output will be similar to this:
+--------------------+
| Database |
+--------------------+
| hello_world_db |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
After creating the sample database, disconnect from the MySQL server:
- QUIT
Now, you need to manually install a driver for your database of choice. Because Sequelize provides only ORM features, it doesn't include built-in database drivers, so you'll need to install the driver that matches your database. To do that, navigate to the project directory in the terminal and install the MySQL driver to the project using the following command:
- npm install --save mysql2
In this case, you are using the driver for MySQL.
Note: Since this tutorial uses MySQL as the database, you are using a driver for that. Depending on your database, you can manually install the driver like so:
npm install --save pg pg-hstore # Postgres
npm install --save mysql2
npm install --save mariadb
npm install --save sqlite3
npm install --save tedious # Microsoft SQL Server
Now that you have a sample database, you can create your first Sequelize application with database connectivity.
In this section, you will connect the Node.js application to the MySQL database using Sequelize.
To connect to the database, open server.js
for editing using nano
or your preferred code editor:
- nano server.js
Here, you will create a database connection in your application using a Sequelize instance. In the new Sequelize()
method, pass the MySQL server parameters and database credentials as follows, replacing DATABASE_USERNAME
and DATABASE_PASSWORD
with the credentials of your MySQL user:
const Sequelize = require("sequelize");
const sequelize = new Sequelize(
'hello_world_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
host
is where the MySQL server is hosted, so you’ll need to provide a server URL or an IP address. If you are using a locally installed MySQL server, you can replace DATABASE_HOST
with localhost
or 127.0.0.1
as the value.
Similarly, if you are using a remote server, make sure to replace database connection values accordingly with the appropriate remote server details.
Note: If you are using other database server software, set the dialect parameter accordingly. Supported values include 'mysql', 'mariadb', 'postgres', and 'mssql'.
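For illustration only (this is not part of the tutorial's code, and connectionOptions is a hypothetical helper), the sketch below shows that only the dialect field changes between these databases while the rest of the connection options stay the same:

```javascript
// Hypothetical helper: build Sequelize-style connection options where
// only the `dialect` field varies by database server.
function connectionOptions(dialect) {
  const supported = ["mysql", "mariadb", "postgres", "mssql"];
  if (!supported.includes(dialect)) {
    throw new Error(`Unsupported dialect: ${dialect}`);
  }
  // Placeholder host value; a real app would use its own server address.
  return { host: "localhost", dialect };
}

console.log(connectionOptions("postgres"));
```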
Next, call a promise-based authenticate()
method to instantiate a database connection to the application. To do that, add the following code block to your server.js file:
...
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
The authenticate()
method is used to connect with the database and tests whether the given credentials are correct. Here, the database connection is open by default and the same connection can be used for all queries. Whenever you need to close the connection, call the sequelize.close()
method after this authenticate()
call. To learn more about Sequelize, please see their getting started guide.
Most of the methods provided by Sequelize are asynchronous, which means your application can keep running while an asynchronous operation is still executing. Each asynchronous method returns a promise that resolves with the value produced at the end of the operation, so you can chain then(), catch(), and finally() on these methods to handle the processed data.
At this point, the server.js
file will look like the following:
const Sequelize = require("sequelize");
const sequelize = new Sequelize(
'hello_world_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
Save and close your file.
In the project directory, run the server.js
application by running the following command:
- node server.js
Your output will look like this:
Output
Connection has been established successfully.
You have created the database connection successfully.
In this step, you installed Sequelize, created a sample database, and used Sequelize to connect with the database. Next, you will work with models in Sequelize.
Now that you have created a sample MySQL database, you can use Sequelize to create a table and populate it with data. In Sequelize, database tables are referred to as models. A model is an abstraction that represents a table of the database. Models define several things to Sequelize, such as the name of the table, column details, and data types. In this step, you will create a Sequelize model for book data.
To begin, create a new file called book.model.js
in the project directory:
- nano book.model.js
Similar to the previous step, add the Sequelize code for database initiation, with a new import for DataTypes at the top of the file:
const { Sequelize, DataTypes } = require("sequelize");
Sequelize contains many built-in data types. To access those data types, you add an import for DataTypes
. This tutorial refers to some frequently used data types, such as STRING
, INTEGER
, and DATEONLY
. To learn more about other supported data types, you can refer to the official Sequelize documentation.
Then, include the lines you used previously to create a connection to your MySQL database, updating your MySQL credentials accordingly:
...
const sequelize = new Sequelize(
'hello_world_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
Next, you will create a model called books
, which includes title
, author
, release_date
, and subject
ID. To do that, use the sequelize.define()
method as shown:
...
const Book = sequelize.define("books", {
title: {
type: DataTypes.STRING,
allowNull: false
},
author: {
type: DataTypes.STRING,
allowNull: false
},
release_date: {
type: DataTypes.DATEONLY,
},
subject: {
type: DataTypes.INTEGER,
}
});
The sequelize.define()
method defines a new model, which represents a table in the database. This code block creates a table called books
and stores the book records according to the title
, author
, release_date
, and subject
.
In this code, setting allowNull to false specifies that the column value cannot be null. Likewise, if you need to supply a default value for a column, you can use defaultValue: "value".
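To make these column options concrete, here is a plain-JavaScript sketch (the buildRow helper is hypothetical, not a Sequelize API) of how allowNull and defaultValue behave when a row is built:

```javascript
// Hypothetical helper: apply column options when building a row object.
function buildRow(definition, input) {
  const row = {};
  for (const [column, options] of Object.entries(definition)) {
    let value = input[column];
    if (value == null && "defaultValue" in options) {
      value = options.defaultValue; // fill in the declared default
    }
    if (value == null && options.allowNull === false) {
      throw new Error(`Column "${column}" cannot be null`);
    }
    row[column] = value ?? null;
  }
  return row;
}

const definition = {
  title: { allowNull: false }, // required column
  subject: { defaultValue: 0 }, // optional column with a default
};

console.log(buildRow(definition, { title: "Clean Code" }));
```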
Next, you’ll add the book
model to your database. To do that, you’ll use the sync()
method as follows:
...
sequelize.sync().then(() => {
console.log('Book table created successfully!');
}).catch((error) => {
console.error('Unable to create table : ', error);
});
In the sync()
method, you’re asking Sequelize to do a few things to the database. With this call, Sequelize will automatically perform an SQL query to the database and create a table, printing the message Book table created successfully!
.
As mentioned, the sync() method is promise-based, which means it also supports error handling. In this code block, you check whether the table was created successfully; if not, the catch method receives the error and prints it to the output.
Note: You can manage model synchronization by passing a force option, which controls whether an existing table is dropped and recreated or reused. Here are some examples, which may be helpful to you while working with Sequelize:
model.sync(): This creates the table if it doesn't exist already.
model.sync({ force: true }): This creates the table, first dropping it if the same table exists already.
The final code will look like this:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'hello_world_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Book = sequelize.define("books", {
title: {
type: DataTypes.STRING,
allowNull: false
},
author: {
type: DataTypes.STRING,
allowNull: false
},
release_date: {
type: DataTypes.DATEONLY,
},
subject: {
type: DataTypes.INTEGER,
}
});
sequelize.sync().then(() => {
console.log('Book table created successfully!');
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Save and close your file.
Run your application by using the following command:
- node book.model.js
You will get the following output in your command line:
Output
Executing (default): SELECT 1+1 AS result
Executing (default): CREATE TABLE IF NOT EXISTS `books` (`id` INTEGER NOT NULL auto_increment , `title` VARCHAR(255) NOT NULL, `author` VARCHAR(255) NOT NULL, `release_date` DATE, `subject` INTEGER, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB;
Connection has been established successfully.
Executing (default): SHOW INDEX FROM `books`
Book table created successfully!
In the output, you will see the return log contains the message, Book table created successfully!
. You can verify this by checking your database to see the new books
table created in the hello_world_db
database.
To verify the creation of the new table, log into your MySQL instance:
- mysql -u YOUR_USERNAME -p
After inputting your password, change into the sample database:
- USE hello_world_db;
And then run the command to show tables:
- SHOW TABLES;
Your output will be similar to this:
+---------------------------+
| Tables_in_hello_world_db |
+---------------------------+
| books |
+---------------------------+
1 row in set (0.00 sec)
Finally, disconnect from the MySQL server:
- QUIT
You have verified that the book
model creation was successful. Using this process, you can create any number of models by following the same procedure.
In this step, you created a model in a database and initiated working with a model using built-in methods. You also used Sequelize-supported data types to define your model. Next, you will work with basic model queries.
In this step, you will use the Sequelize built-in queries for insertion, selection, selection with conditional clauses, and deletion.
In the previous step, you created a book
model inside the database. In this section, you’ll insert data into this model.
To get started, copy the contents of book.model.js
from the previous step. Create a new file called book.controller.js
to handle the query logic. Add the code from book.model.js
to book.controller.js
.
In book.controller.js
, locate the sync()
method. In the sync()
method, add the following highlighted lines:
...
sequelize.sync().then(() => {
console.log('Book table created successfully!');
Book.create({
title: "Clean Code",
author: "Robert Cecil Martin",
release_date: "2021-12-14",
subject: 3
}).then(res => {
console.log(res)
}).catch((error) => {
console.error('Failed to create a new record : ', error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Here, you insert a new book record into the books model you created earlier. Once the sync() method executes successfully, its then() callback runs. Inside that callback, you call the create() method to insert the new record into the model.
You use the create()
method to pass the data you need to add to the database as an object. The highlighted section of code will insert a new entry to your existing books
table. In this example, you add Clean Code
by Robert Cecil Martin
, which has been categorized with the subject
ID of 3
. You can use the same code, updated with information for other books, to add new records to your database.
Save and close the file.
Run the application using the following command:
- node book.controller.js
Your output will look similar to the following:
Output
books {
dataValues:
{ id: 1,
title: 'Clean Code',
author: 'Robert Cecil Martin',
release_date: '2021-12-14',
subject: 3,
updatedAt: 2021-12-14T10:12:16.644Z,
...
}
You inserted a new record to the model you created in the database. You can continue adding multiple records using the same process.
In this section, you will select and get all the book records from the database using the findAll()
method. To do that, first open book.controller.js
and remove the previous Book.create()
method. In the sync()
method, add the Book.findAll()
method as shown:
...
sequelize.sync().then(() => {
Book.findAll().then(res => {
console.log(res)
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
...
Save and close the file.
Next, run the application again using the following command:
- node book.controller.js
Your output will look similar to the following:
Output
[
books {
dataValues: {
id: 1,
title: 'Clean Code',
author: 'Robert Cecil Martin',
release_date: '2020-01-01',
subject: 3,
createdAt: 2021-02-22T09:13:55.000Z,
updatedAt: 2021-02-22T09:13:55.000Z
},
_previousDataValues: {
id: 1,
title: 'Clean Code',
author: 'Robert Cecil Martin',
release_date: '2020-01-01',
subject: 3,
createdAt: 2021-02-22T09:13:55.000Z,
updatedAt: 2021-02-22T09:13:55.000Z
},
...
]
The output contains all book data as an array object. You successfully used the Sequelize findAll()
method to return all book data from the database.
where Clause
In this section, you will select values with conditions using the where
clause. The where
clause is used to specify a condition while fetching data. For this tutorial, you will get a book by a specific record ID from the database using the findOne()
method.
To do that, open book.controller.js
for editing, delete the findAll()
method, and add the following lines:
...
sequelize.sync().then(() => {
Book.findOne({
where: {
id : "1"
}
}).then(res => {
console.log(res)
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Here, you select a specific book record from the database using the findOne()
method with the where
option. In this example, you are retrieving the book data whose id
is equal to 1
.
Save and close the file.
Next, run the application:
- node book.controller.js
Your output will look similar to the following:
Output
books {
dataValues: {
id: 1,
title: 'Clean Code',
author: 'Robert Cecil Martin',
release_date: '2020-01-01',
subject: 'Science',
createdAt: 2021-02-22T09:13:55.000Z,
updatedAt: 2021-02-22T09:13:55.000Z
},
...
}
You have successfully used where
clauses to get data from Sequelize models. You can use the where
clause in the database application to capture conditional data.
To delete a specific record from the database model, you use the destroy()
method with the where
option. To do that, open book.controller.js
, remove the findOne()
method, and add the following highlighted lines:
...
sequelize.sync().then(() => {
Book.destroy({
where: {
id: 2
}
}).then(() => {
console.log("Successfully deleted record.")
}).catch((error) => {
console.error('Failed to delete record : ', error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Here, you remove a book record from the database by using the destroy()
method with the where
option and passing in the id
of the book to remove. You are going to remove the book record whose id
equals 2
.
Save and close the file.
Next, run the application:
- node book.controller.js
Your output will look like the following:
Output
Successfully deleted record.
The record has been deleted.
In this step, you experimented with your database model and model querying. You initiated the database, created models, inserted records, retrieved records, retrieved records with conditions using the where
clause, and deleted selected records. With this knowledge of Sequelize, you will now create associations in Sequelize. After that, you will be able to define and work with a variety of relationships using Sequelize models.
In this step, you will use the standard association types that Sequelize supports: one-to-one, one-to-many, and many-to-many associations. You’ll use sample data about students, courses, and grade levels.
Sequelize uses association types based on the following database relationships:
one-to-one relationship: A one-to-one relationship means a record in one table is associated with exactly one record in another table. In terms of Sequelize, you can use belongsTo()
and hasOne()
associations to create this type of relationship.
one-to-many relationship: A one-to-many relationship means a record in one table is associated with multiple records in another table. With Sequelize, you can use hasMany()
associations methods to create this type of relationship.
many-to-many relationship: A many-to-many relationship means multiple records in one table are associated with multiple records in another table. With Sequelize, you can use belongsToMany()
associations to create this type of relationship.
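The foreign-key layout behind these relationship types can be pictured with plain objects. The sketch below (illustration only, not Sequelize code; gradeOf and studentsOf are hypothetical helpers) resolves a belongsTo-style lookup and a hasMany-style lookup over the same sample data used later in this step:

```javascript
// Sample rows mirroring the grades and students data used in this step.
const grades = [
  { id: 1, grade: 9 },
  { id: 2, grade: 10 },
  { id: 3, grade: 11 },
];
const students = [
  { name: "John Baker", gradeId: 2 },
  { name: "Max Butler", gradeId: 1 },
  { name: "Ryan Fisher", gradeId: 3 },
  { name: "Robert Gray", gradeId: 2 },
  { name: "Sam Lewis", gradeId: 1 },
];

// belongsTo direction: the student row holds the foreign key, so a
// student's grade is the single grades row it points at.
function gradeOf(student) {
  return grades.find((g) => g.id === student.gradeId);
}

// hasMany direction: one grades row is pointed at by many student
// rows, so we collect every student carrying that foreign key.
function studentsOf(gradeRow) {
  return students.filter((s) => s.gradeId === gradeRow.id);
}

console.log(gradeOf(students[0])); // { id: 2, grade: 10 }
console.log(studentsOf(grades[0]).map((s) => s.name)); // [ 'Max Butler', 'Sam Lewis' ]
```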
Before creating these associations, you will first create a new database called student_db
and add new models and some sample data for students, courses, and grade level.
To create the database, follow the same process in Step 1 — Installing and Configuring Sequelize to log into MySQL and create a database called student_db
. Once the new database has been created, log out of MySQL. Next, you’ll start creating database associations.
belongsTo()
In this section, you will create a one-to-one relationship using Sequelize models. Imagine you want to get one student’s details along with their grade level. Since one student can have only one grade level, this type of association is a one-to-one relationship and you can use the belongsTo()
method.
Note: There is a difference between belongsTo()
and hasOne()
. belongsTo()
will add the foreignKey on the source table, whereas hasOne() will add it to the target table. If both relationships are used at the same time, they work as a bidirectional one-to-one relationship in Sequelize.
The belongsTo()
method allows you to create a one-to-one relationship between two Sequelize models. In this example, you are using the Student
and Grade
models.
Create a new file called one_to_one.js
. As you did in the previous section, Connecting to the MySQL Database, include the lines to create a connection to the database and authenticate your MySQL user to the top of the file. Be sure to update the MySQL credentials as needed:
const { Sequelize, DataTypes } = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
In this section, you will create three models in the new student_db
database: Student
, Grade
, and Course
. You’ll begin by creating the Student
and Grade
models. Later in this step, you’ll create the Courses
model.
For the Student
model, add the following code block to one_to_one.js
:
...
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
This student model contains two columns: student_id
and name
.
Next, add a code block for the Grade
model:
...
const Grade = sequelize.define("grades", {
grade: {
type: DataTypes.INTEGER,
allowNull: false
}
});
The Grade
model contains the column grade
.
To demonstrate the associations, you'll need to add sample data to the database. For that, you'll use the bulkCreate() method. Rather than inserting rows one by one, the bulkCreate() method allows you to insert multiple rows into your database models at once.
So now, import the Grade
and Student
data to their respective models in the database as shown:
...
const grade_data = [{grade : 9}, {grade : 10}, {grade : 11}]
const student_data = [
{name : "John Baker", gradeId: 2},
{name : "Max Butler", gradeId: 1},
{name : "Ryan Fisher", gradeId: 3},
{name : "Robert Gray", gradeId: 2},
{name : "Sam Lewis", gradeId: 1}
]
sequelize.sync({ force: true }).then(() => {
Grade.bulkCreate(grade_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
…
}).catch((err) => { console.log(err); });
}).catch((err) => { console.log(err); });
}).catch((error) => {
console.error('Unable to create the table : ', error);
});
Here, you provide sample data and import the data into the Student
and Grade
models. With your database, models, and sample data in place, you’re ready to create associations.
In one_to_one.js
, add the following line below the student_data
block:
...
Student.belongsTo(Grade);
Next, you will need to check whether the association is working properly. To do that, you can retrieve all students’ data with associated grade levels by passing the include
parameter inside the findAll()
method.
Since you need to get the student grade level, you’ll pass Grade
as the model. In the sequelize.sync()
method, add the highlighted lines as shown:
...
sequelize.sync({ force: true }).then(() => {
Grade.bulkCreate(grade_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
Student.findAll({
include: [{
model: Grade
}]
}).then(result => {
console.log(result)
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((err) => { console.log(err); });
}).catch((err) => { console.log(err); });
}).catch((error) => {
console.error('Unable to create the table : ', error);
});
The complete code looks like the following:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const Grade = sequelize.define("grades", {
grade: {
type: DataTypes.INTEGER,
allowNull: false
}
});
const grade_data = [{grade : 9}, {grade : 10}, {grade : 11}]
const student_data = [
{name : "John Baker", gradeId: 2},
{name : "Max Butler", gradeId: 1},
{name : "Ryan Fisher", gradeId: 3},
{name : "Robert Gray", gradeId: 2},
{name : "Sam Lewis", gradeId: 1}
]
// One-To-One association
Student.belongsTo(Grade);
sequelize.sync({ force: true }).then(() => {
Grade.bulkCreate(grade_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
Student.findAll({
include: [{
model: Grade
}]
}).then(result => {
console.log(result)
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((err) => { console.log(err); });
}).catch((err) => { console.log(err); });
}).catch((error) => {
console.error('Unable to create the table : ', error);
});
Save and close your file.
Run the file by using the following command:
- node one_to_one.js
The output will be long, and you will see all students’ data with grade levels. Here is a snippet of the output showing student data:
Output
students {
dataValues:
{ student_id: '3e786a8f-7f27-4c59-8e9c-a8c606892288',
name: 'Sam Lewis',
createdAt: 2021-12-16T08:49:38.000Z,
updatedAt: 2021-12-16T08:49:38.000Z,
gradeId: 1,
grade: [grades] },
_previousDataValues:
...
Depending on the command-line tool you are using, the output may or may not print an expanded view. If it does, the nested grade object is printed in expanded form.
In this section, you created a one-to-one relationship using the Student.belongsTo(Grade);
method call and got the details according to the association you created.
hasMany()
In this section, you will create a one-to-many relationship using Sequelize models. Imagine you’d like to get all the students associated with a selected grade level. Since one specific grade level can have multiple students, this is a one-to-many relationship.
To get started, copy the contents of one_to_one.js
into a new file called one_to_many.js
. In one_to_many.js
, remove the lines after the student_data
block. Your one_to_many.js
file will look like this:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const Grade = sequelize.define("grades", {
grade: {
type: DataTypes.INTEGER,
allowNull: false
}
});
const grade_data = [ {grade : 9}, {grade : 10}, {grade : 11}]
const student_data = [
{name : "John Baker", gradeId: 2},
{name : "Max Butler", gradeId: 1},
{name : "Ryan Fisher", gradeId: 3},
{name : "Robert Gray", gradeId: 2},
{name : "Sam Lewis", gradeId: 1}
]
After the student_data
block, use the hasMany()
method to create a new relationship:
...
Grade.hasMany(Student)
The hasMany()
method allows you to create a one-to-many relationship between two Sequelize models. Here, you are using the Grade
and Student
models.
Next, add the sequelize.sync()
method with the findAll()
method below the hasMany()
line:
...
sequelize.sync({ force: true }).then(() => {
Grade.bulkCreate(grade_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
Grade.findAll({
where: {
grade: 9
},
include: [{
model: Student
}]
}).then(result => {
console.dir(result, { depth: 5 });
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((err) => { console.log(err); });
}).catch((err) => { console.log(err); });
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Here you are trying to access all the students in a particular grade level—in this case, all the students in grade 9
. You also added the Student
model in the include
option.
Here is the complete code:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const Grade = sequelize.define("grades", {
grade: {
type: DataTypes.INTEGER,
allowNull: false
}
});
const grade_data = [ {grade : 9}, {grade : 10}, {grade : 11}]
const student_data = [
{name : "John Baker", gradeId: 2},
{name : "Max Butler", gradeId: 1},
{name : "Ryan Fisher", gradeId: 3},
{name : "Robert Gray", gradeId: 2},
{name : "Sam Lewis", gradeId: 1}
]
// One-To-Many relationship
Grade.hasMany(Student);
sequelize.sync({ force: true }).then(() => {
Grade.bulkCreate(grade_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
Grade.findAll({
where: {
grade: 9
},
include: [{
model: Student
}]
}).then(result => {
console.dir(result, { depth: 5 });
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((err) => { console.log(err); });
}).catch((err) => { console.log(err); });
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Save and close your file.
Run the file with the following command:
- node one_to_many.js
The output will look similar to the following. It will be quite long, but all students in grade 9
will be returned as follows:
Output[ grades {
dataValues:
{ id: 1,
grade: 9,
createdAt: 2021-12-20T05:12:31.000Z,
updatedAt: 2021-12-20T05:12:31.000Z,
students:
[ students {
dataValues:
{ student_id: '8a648756-4e22-4bc0-8227-f590335f9965',
name: 'Sam Lewis',
createdAt: 2021-12-20T05:12:31.000Z,
updatedAt: 2021-12-20T05:12:31.000Z,
gradeId: 1 },
...
students {
dataValues:
{ student_id: 'f0304585-91e5-4efc-bdca-501b3dc77ee5',
name: 'Max Butler',
createdAt: 2021-12-20T05:12:31.000Z,
updatedAt: 2021-12-20T05:12:31.000Z,
gradeId: 1 },
...
In this section, you created a one-to-many relationship using the Grade.hasMany(Student);
method call. In the output, you retrieved the details according to the association you created.
belongsToMany()
In this section, you will create many-to-many relationships using Sequelize models. As an example, imagine a situation where students are enrolled in courses. One student can enroll in many courses and one course can have many students. This is a many-to-many relationship. To implement this using Sequelize, you will use the models Student
, Course
, and StudentCourse
with the belongsToMany()
method.
To get started, create a file called many_to_many.js
and add the database initiation and authentication code blocks as follows. (You can reuse the code blocks from the previous one_to_many.js
example.) Make sure to update the highlighted database connection values as needed.
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
Next, you’ll create the database models for many-to-many relationships: Student
and Course
. Then you’ll add some sample data to those models.
...
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const Course = sequelize.define("courses", {
course_name: {
type: DataTypes.STRING,
allowNull: false
}
});
const StudentCourse = sequelize.define('StudentCourse', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true,
allowNull: false
}
});
const course_data = [
{course_name : "Science"},
{course_name : "Maths"},
{course_name : "History"}
]
const student_data = [
{name : "John Baker", courseId: 2},
{name : "Max Butler", courseId: 1},
{name : "Ryan Fisher", courseId: 3},
{name : "Robert Gray", courseId: 2},
{name : "Sam Lewis", courseId: 1}
]
const student_course_data = [
{studentId : 1, courseId: 1},
{studentId : 2, courseId: 1},
{studentId : 2, courseId: 3},
{studentId : 3, courseId: 2},
{studentId : 1, courseId: 2},
]
Here, you create the Student
and Course
models and provide some sample data. You also set a courseId
, which you will use to retrieve students according to this relationship type.
Finally, you defined a new model called StudentCourse
, which manages the relationship data between Student
and Course
. In this example, studentId 1
is enrolled in courseId 1
and courseId 2
.
You have completed the database initiation and added sample data to the database. Next, create many-to-many relationships using the belongsToMany()
method as shown:
...
Course.belongsToMany(Student, { through: 'StudentCourse'})
Student.belongsToMany(Course, { through: 'StudentCourse'})
Within the belongsToMany()
method, you pass the through
configuration with the name of the model as the configuration option. In this case, it is StudentCourse
. This is the table that manages the many-to-many relationships.
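As a plain-JavaScript sketch (sample data only, no Sequelize involved), the through table works like this: each row pairs a studentId with a courseId, and resolving the association means looking up students through those pairs.

```javascript
// Plain-JS sketch of how a through table resolves a many-to-many
// association (data mirrors the sample student_course_data above).
const students = [
  { id: 1, name: 'John Baker' },
  { id: 2, name: 'Max Butler' },
  { id: 3, name: 'Ryan Fisher' },
];
const studentCourse = [
  { studentId: 1, courseId: 1 },
  { studentId: 2, courseId: 1 },
  { studentId: 2, courseId: 3 },
  { studentId: 1, courseId: 2 },
];

// Rough equivalent of including Student when querying a Course:
function studentsInCourse(courseId) {
  return studentCourse
    .filter((row) => row.courseId === courseId)
    .map((row) => students.find((s) => s.id === row.studentId).name);
}

console.log(studentsInCourse(1)); // [ 'John Baker', 'Max Butler' ]
```

In the database, Sequelize performs the same resolution with JOINs through the StudentCourse table.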
Finally, you can check whether the association is working properly by retrieving all course data with associated students. You’ll do that by passing the include
parameter inside the findAll()
method. Add the following lines to many_to_many.js
:
...
sequelize.sync({ force: true }).then(() => {
Course.bulkCreate(course_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
StudentCourse.bulkCreate(student_course_data, { validate: true }).then(() => {
Course.findAll({
include: {
model: Student,
},
}).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
The complete code looks like the following:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const Course = sequelize.define("courses", {
course_name: {
type: DataTypes.STRING,
allowNull: false
}
});
const StudentCourse = sequelize.define('StudentCourse', {
id: {
type: DataTypes.INTEGER,
primaryKey: true,
autoIncrement: true,
allowNull: false
}
});
const course_data = [
{course_name : "Science"},
{course_name : "Maths"},
{course_name : "History"}
]
const student_data = [
{name : "John Baker", courseId: 2},
{name : "Max Butler", courseId: 1},
{name : "Ryan Fisher", courseId: 3},
{name : "Robert Gray", courseId: 2},
{name : "Sam Lewis", courseId: 1}
]
const student_course_data = [
{studentId : 1, courseId: 1},
{studentId : 2, courseId: 1},
{studentId : 2, courseId: 3},
{studentId : 3, courseId: 2},
{studentId : 1, courseId: 2},
]
Course.belongsToMany(Student, { through: 'StudentCourse'})
Student.belongsToMany(Course, { through: 'StudentCourse'})
sequelize.sync({ force: true }).then(() => {
Course.bulkCreate(course_data, { validate: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then(() => {
StudentCourse.bulkCreate(student_course_data, { validate: true }).then(() => {
Course.findAll({
include: {
model: Student,
},
}).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Save and close the file.
Run the file using the following command:
- node many_to_many.js
The output will be long, but will look similar to the following:
Output[ courses {
dataValues:
{ id: 1,
course_name: 'Science',
createdAt: 2022-05-11T04:27:37.000Z,
updatedAt: 2022-05-11T04:27:37.000Z,
students: [Array] },
_previousDataValues:
{ id: 1,
course_name: 'Science',
createdAt: 2022-05-11T04:27:37.000Z,
updatedAt: 2022-05-11T04:27:37.000Z,
students: [Array] },
_changed: Set {},
_options:
{ isNewRecord: false,
_schema: null,
_schemaDelimiter: '',
include: [Array],
includeNames: [Array],
includeMap: [Object],
includeValidated: true,
attributes: [Array],
raw: true },
isNewRecord: false,
students: [ [students], [students] ] },
courses {
dataValues:
{ id: 2,
course_name: 'Maths',
createdAt: 2022-05-11T04:27:37.000Z,
updatedAt: 2022-05-11T04:27:37.000Z,
students: [Array] },
_previousDataValues:
...
As you can see in this output, the courses with associated students were retrieved. Within the courses
block, you will see separate id
values that indicate each course. For example, id: 1
is connected to the course_name: Science
for the Science class, whereas id: 2
is the Maths class, and so on.
In the database, you can see the three generated tables with the sample data you inserted.
In this step, you used Sequelize to create one-to-one, one-to-many, and many-to-many associations. Next, you will work with raw queries.
In this step, you will work with raw queries in Sequelize. In previous steps, you used Sequelize built-in methods, such as bulkCreate()
and findAll()
, to handle data insertion and selection from the database. You may have noticed that those methods follow a specific pattern for writing a query. However, with the use of raw queries, you don’t need to worry about Sequelize built-in methods and patterns. Using your knowledge of SQL queries, you can conduct a range of queries in Sequelize from simple to more advanced.
Here are examples of raw queries that select all values from a table, delete rows that match a condition, and update a table with given values:
SELECT * FROM table_name;
DELETE FROM table_name WHERE condition;
UPDATE table_name SET y = 42 WHERE x = 12;
In Sequelize, raw queries primarily use two replacement methodologies: array replacement and object replacement. When you pass values into a SQL query, you can use either an array or an object to do that replacement.
Before writing a raw query, you will first need to supply student data in a sample database. Following the previous section, Creating a Sample Database, log in to MySQL, create a database called sample_student_db
, and log out of MySQL.
Next, you’ll add some raw data to start working with raw queries. Create a new file called add_student_records.js
and add the following code blocks, which contain the previously discussed Sequelize methods of authenticate()
, sync()
, and bulkCreate()
.
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'sample_student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
const Student = sequelize.define("students", {
student_id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false
}
});
const student_data = [
{name : "John Baker"},
{name : "Max Butler"},
{name : "Ryan Fisher"},
{name : "Robert Gray"},
{name : "Sam Lewis"}
]
sequelize.sync({ force: true }).then(() => {
Student.bulkCreate(student_data, { validate: true }).then((result) => {
console.log(result);
}).catch((error) => {
console.log(error);
});
}).catch((error) => {
console.error('Unable to create table : ', error);
});
Here, you initiate the database connection, create the model, and insert a few student records inside the new database.
Save and close the file.
Next, run this script using the following command:
- node add_student_records.js
The output will be similar to the following. It will be quite long, but all the student records that you inserted will be returned. Note that since the student_id
is an auto-generated UUID (Universally Unique Identifier) value, it will differ in your output.
OutputExecuting (default): SELECT 1+1 AS result
Executing (default): DROP TABLE IF EXISTS `students`;
Connection has been established successfully.
Executing (default): DROP TABLE IF EXISTS `students`;
Executing (default): CREATE TABLE IF NOT EXISTS `students` (`student_id` CHAR(36) BINARY , `name` VARCHAR(255) NOT NULL, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`student_id`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `students`
Executing (default): INSERT INTO `students` (`student_id`,`name`,`createdAt`,`updatedAt`) VALUES ('45d1f26c-ba76-431f-ac5f-f41282351710','John Baker','2022-06-03 07:27:49','2022-06-03 07:27:49'),('1cb4e34d-bfcf-4a97-9624-e400b9a1a5f2','Max Butler','2022-06-03 07:27:49','2022-06-03 07:27:49'),('954c576b-ba1c-4dbc-a5c6-8eaf22bbbb04','Ryan Fisher','2022-06-03 07:27:49','2022-06-03 07:27:49'),('e0f15cd3-0025-4032-bfe8-774e38e14c5f','Robert Gray','2022-06-03 07:27:49','2022-06-03 07:27:49'),('826a0ec9-edd0-443f-bb12-068235806659','Sam Lewis','2022-06-03 07:27:49','2022-06-03 07:27:49');
[
students {
dataValues: {
student_id: '45d1f26c-ba76-431f-ac5f-f41282351710',
name: 'John Baker',
createdAt: 2022-06-03T07:27:49.453Z,
updatedAt: 2022-06-03T07:27:49.453Z
},
_previousDataValues: {
name: 'John Baker',
student_id: '45d1f26c-ba76-431f-ac5f-f41282351710',
createdAt: 2022-06-03T07:27:49.453Z,
updatedAt: 2022-06-03T07:27:49.453Z
},
…
In the next section, you will apply raw queries using one of the student_id
outputs in the code block above. Copy it down so that you have it for the next sections, where you will use the query()
method for array and object replacements.
In this section, you’ll use the query()
method for an array replacement. With this method, Sequelize can execute raw or already prepared SQL queries.
To get started, copy the contents of the server.js
file from Step 1, as that includes the Sequelize()
initialization and database connection code. Paste the contents into a new file called array_raw_query.js
. Update the database name to sample_student_db
:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'sample_student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
At the end of the file, add the following code block for an array replacement, making sure to replace REPLACE_STUDENT_ID
with the student_id
value that you copied in the previous section.
...
sequelize.query(
'SELECT * FROM students WHERE student_id = ?',
{
replacements: ['REPLACE_STUDENT_ID'],
type: sequelize.QueryTypes.SELECT
}
).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
For array replacement, you pass the query()
method both the SQL query and a configuration object. The configuration contains the replacements
value and the type. In replacements, you pass the data as an array, and each question mark (?
) placeholder in the query is substituted with the corresponding array value, in order.
Next, since you need to get data about a specific student, the student_id
is supplied through the replacements array. After that, you pass the type: sequelize.QueryTypes.SELECT
key-value pair, which tells Sequelize you are selecting data from the database.
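To illustrate the substitution order only, here is a toy positional binder. This is not Sequelize's actual implementation, which escapes values safely through the database driver, but it shows how each ? is filled from the array in order:

```javascript
// Toy positional binder: each "?" is filled from the replacements
// array in order. Sequelize's real binding also escapes values
// safely via the database driver; do not use this for real queries.
function bindPositional(sql, replacements) {
  let i = 0;
  return sql.replace(/\?/g, () => {
    const value = replacements[i++];
    return typeof value === 'string' ? `'${value.replace(/'/g, "''")}'` : String(value);
  });
}

const bound = bindPositional(
  'SELECT * FROM students WHERE student_id = ?',
  ['45d1f26c-ba76-431f-ac5f-f41282351710']
);
console.log(bound);
// SELECT * FROM students WHERE student_id = '45d1f26c-ba76-431f-ac5f-f41282351710'
```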
There are some other types as well, such as QueryTypes.UPDATE
and QueryTypes.DELETE
. Depending on the requirement, you can select the type that suits your purpose.
The following shows the full code block. Here you connect to the database and retrieve the selected student data using a raw query.
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'sample_student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
sequelize.query(
'SELECT * FROM students WHERE student_id = ?',
{
replacements: ['REPLACE_STUDENT_ID'],
type: sequelize.QueryTypes.SELECT
}
).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
Save and close your file.
Next, you can run this script using the following command:
- node array_raw_query.js
You will see output similar to the following:
OutputConnection has been established successfully.
[ { student_id: 'STUDENT_ID_YOU_RETRIEVED',
name: 'Robert Gray',
createdAt: 2022-05-06T13:14:50.000Z,
updatedAt: 2022-05-06T13:14:50.000Z } ]
Depending on the student_id
you selected, your output values will differ.
On the surface, object replacement is similar to array replacement, but the pattern of passing data to the raw query is different. In the replacements option, you pass data as an object, and in the query itself, you reference those values with named placeholders like :key
.
To get started, create a new file called object_raw_query.js
and paste the complete code blocks from the server.js
file, updating the database to sample_student_db
.
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'sample_student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
Then, add the following code block to the end of the new object_raw_query.js
file:
...
sequelize.query(
'SELECT * FROM students WHERE student_id = :id',
{
replacements: { id: 'REPLACE_STUDENT_ID' },
type: sequelize.QueryTypes.SELECT
}
).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
Here, you get the selected student data using the object replacement method. You create a replacements
object, setting the id
as the student information you wish to retrieve: { id: 'REPLACE_STUDENT_ID' }
.
In the query()
, you indicate: 'SELECT * FROM students WHERE student_id = :id'
. Using the query()
method, you pass the replacement value as an object, which is why this method is known as object replacement.
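As with the array case, a toy sketch (again, not Sequelize's real, driver-escaped implementation) shows how named :key placeholders are resolved against the replacements object:

```javascript
// Toy named binder: each ":key" placeholder is looked up in the
// replacements object. Sequelize's real binding escapes values via
// the database driver; this is for illustration only.
function bindNamed(sql, replacements) {
  return sql.replace(/:(\w+)/g, (match, key) => {
    const value = replacements[key];
    return typeof value === 'string' ? `'${value.replace(/'/g, "''")}'` : String(value);
  });
}

const bound = bindNamed(
  'SELECT * FROM students WHERE student_id = :id',
  { id: 'abc-123' }
);
console.log(bound);
// SELECT * FROM students WHERE student_id = 'abc-123'
```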
Here is the complete code:
const {Sequelize, DataTypes} = require("sequelize");
const sequelize = new Sequelize(
'sample_student_db',
'DATABASE_USERNAME',
'DATABASE_PASSWORD',
{
host: 'DATABASE_HOST',
dialect: 'mysql'
}
);
sequelize.authenticate().then(() => {
console.log('Connection has been established successfully.');
}).catch((error) => {
console.error('Unable to connect to the database: ', error);
});
sequelize.query(
'SELECT * FROM students WHERE student_id = :id',
{
replacements: { id: 'REPLACE_STUDENT_ID' },
type: sequelize.QueryTypes.SELECT
}
).then(result => {
console.log(result);
}).catch((error) => {
console.error('Failed to retrieve data : ', error);
});
Save and close the file.
Next, run this script using the following command:
- node object_raw_query.js
The output will look similar to the following:
OutputConnection has been established successfully.
[ { student_id: 'STUDENT_ID_YOU_RETRIEVED',
name: 'Robert Gray',
createdAt: 2022-05-06T13:14:50.000Z,
updatedAt: 2022-05-06T13:14:50.000Z } ]
Depending on the student_id
you selected, your output values will differ.
In this step, you worked with Sequelize raw queries using two different methodologies: array replacement and object replacement.
In this tutorial, you installed and configured Sequelize. You also created and worked with models, which is one of the mandatory components of Sequelize. Finally, you created different types of associations and worked with raw queries using practical examples.
Next, you can use different data types to create database models. You can also update and delete records in databases with both built-in methods and raw queries.
To learn more about Sequelize, check out the product documentation.
Lerna is a tool for managing JavaScript projects with multiple packages. Lerna manages monorepos, repositories that hold one or more projects and their packages.
Monorepos can be challenging to manage because sequential builds and publishing individual packages take a long time. Lerna provides features such as package bootstrapping, parallelized builds, and artifactory publication, which can aid you when managing monorepos. Lerna is beneficial for projects that share common dependencies.
In this tutorial, you will install Lerna, create a working directory, initialize a Lerna project, create a monorepo, bootstrap your packages, and add a dependency to all the packages.
To complete this tutorial, you will need:
npm
version 5.2 or higher on your local machine. If you did not install npm
alongside Node.js, do that now. For Linux, use the command sudo apt install npm
.
In this step, you will install Lerna and set up your project. With Lerna, you can manage common packages across your projects, which is helpful when working with monorepos.
Start by installing Lerna with the following command:
- npm i -g lerna
npm i
will use npm
to install the lerna
package. The -g
flag means that the lerna
command is globally available via the terminal.
Note: If you run into permission errors, you may need to rerun the command with administrator access. Using sudo
for non-root users should do the trick.
You can now change the directory into a location of your choice and create a sample working directory to house the Lerna project.
Run the following command to create your sample working directory:
- mkdir lerna-demo
- cd lerna-demo
You can now run init
within your directory:
- lerna init
You will see the following output:
Output...
lerna notice cli v4.0.0
lerna info Initializing Git repository
lerna info Creating package.json
lerna info Creating lerna.json
lerna info Creating packages directory
lerna success Initialized Lerna files
The init
command creates a lerna.json
file. This file can be customized, though this tutorial will use the default state. This command also initializes a git repository and creates a package.json
file and a packages/
directory.
In this step, you installed Lerna and initialized your project. Next, you will create the monorepo.
In this step, you will create the monorepo needed to work with Lerna. A monorepo is a repository containing a project (or multiple projects) and multiple packages. The folders and packages created here are necessary for the later stages of the tutorial.
Use the following commands to create the apple/
folder under the packages/
directory, then run npm init
to set up the directory:
- cd packages
- mkdir apple
- cd apple
- npm init
You will see the following output for npm init
within the apple/
directory:
Output...
❯ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items and tries to guess sensible defaults.
See `npm help init` for definitive documentation on these fields
and exactly what they do.
Use `npm install <pkg>` afterward to install a package and
save it as a dependency in the package.json file.
Press ^C at any time to quit.
package name: (apple) (ENTER)
version: (1.0.0) (ENTER)
description: (ENTER)
entry point: (index.js) (ENTER)
test command: (ENTER)
git repository: (ENTER)
keywords: (ENTER)
author: (ENTER)
license: (ISC) (ENTER)
About to write to lerna-demo/packages/apple/package.json:
{
"name": "apple",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC"
}
Is this OK? (yes) y
You can keep the default values. After creating and updating the apple/
folder, follow the same process to make folders for orange/
and banana/
.
Now that you have the npm packages initialized with the npm init
step, you can run the bootstrap
command, which will run npm install
in all your packages. Lerna’s bootstrap
command installs package dependencies and links the packages together. Without bootstrapping your packages, the package dependencies will not be satisfied, and you won’t be able to run any npm scripts across the project.
Run the following command to bootstrap your npm packages from your project root folder:
- lerna bootstrap
You’ll see the following output:
Output...
❯ lerna bootstrap
lerna notice cli v4.0.0
lerna info Bootstrapping 3 packages
lerna info Symlinking packages and binaries
lerna success Bootstrapped 3 packages
In this step, you set up the monorepo with three folders and bootstrapped your packages to the directory. Next, you will add dependencies to the packages in your monorepo.
In this step, you will use Lerna to install a sample package in your monorepo. Lerna can help you manage packages and dependencies across projects within the monorepo.
To add a package to your project, use the lerna exec
command to execute a shell command:
- lerna exec npm i lite-server --parallel
You are using lite-server
as an example since it is a lightweight package. You might consider adding other packages with this method, depending on your project requirements.
The --parallel
flag runs the command in every package simultaneously, which saves time during installation.
The expected output will look like this:
Output...
❯ lerna exec npm i lite-server --parallel
lerna notice cli v4.0.0
lerna info Executing command in 3 packages: "npm i lite-server"
apple: added 179 packages, and audited 180 packages in 12s
apple: 6 packages are looking for funding
apple: run `npm fund` for details
apple: found 0 vulnerabilities
orange: added 179 packages, and audited 180 packages in 12s
orange: 6 packages are looking for funding
orange: run `npm fund` for details
orange: found 0 vulnerabilities
banana: added 179 packages, and audited 180 packages in 12s
banana: 6 packages are looking for funding
banana: run `npm fund` for details
banana: found 0 vulnerabilities
lerna success exec Executed command in 3 packages: "npm i lite-server"
A lerna success
message at the end of the output log indicates that the command ran successfully and that you have installed the lite-server
package in each of the project folders.
In this step, you installed a dependency in all your packages. Next, you will run a script in your packages.
In this step, you will run scripts that customize the build for your packages. You can use lerna run <cmd>
to execute the npm run <cmd>
in all of your packages. The npm run
command runs the specified npm script in the package.json
file.
As an example, run the test
script that comes bundled with the packages that you added in Step 2:
- lerna run test --no-bail
Passing a --no-bail
flag informs Lerna to run the script for all the packages, even if a particular package script has an error.
You will see the following output:
Output...
lerna notice cli v4.0.0
lerna info Executing command in 3 packages: "npm run test"
lerna info run Ran npm script 'test' in 'apple' in 0.2s:
> apple@1.0.0 test
> echo "Error: no test specified" && exit 1
Error: no test specified
lerna info run Ran npm script 'test' in 'banana' in 0.2s:
> banana@1.0.0 test
> echo "Error: no test specified" && exit 1
Error: no test specified
lerna info run Ran npm script 'test' in 'orange' in 0.2s:
> orange@1.0.0 test
> echo "Error: no test specified" && exit 1
Error: no test specified
lerna ERR! Received non-zero exit code 1 during execution
lerna success run Ran npm script 'test' in 3 packages in 0.2s:
lerna success - apple
lerna success - banana
lerna success - orange
With this command, you ran the test
script for your packages. Since you have not defined any tests, you receive a default output of "Error: no test specified"
. You did receive the exit code 1
notice, but passing in the --no-bail
flag means that Lerna still ran the script in every package despite the failures. If you don’t pass in the --no-bail
flag, then Lerna will stop execution as soon as the script fails for the first package.
In this article, you learned how to manage monorepos with Lerna. You can now use Lerna to automate tasks requiring similar changes across all packages. You also passed special flags like --no-bail
and --parallel
to customize your builds. For more information, visit the official Lerna documentation page.
To make your package globally available, you can publish your packages to an artifactory of your choice. npmjs is a public artifactory where you can push your packages. The lerna publish
command can push all your packages at once. A lerna success
message means you have successfully uploaded your packages to the artifactory.
Most applications depend on data, whether it comes from a database or an API. Fetching data from an API sends a network request to the API server and returns the data as the response. These round trips take time and can increase your application's response time to users. Furthermore, most APIs limit the number of requests they can serve an application within a specific time frame, a process known as rate limiting.
To get around these problems, you can cache your data so that the application makes a single request to an API, and all the subsequent data requests will retrieve the data from the cache. Redis, an in-memory database that stores data in the server memory, is a popular tool to cache data. You can connect to Redis in Node.js using the node-redis
module, which gives you methods to retrieve and store data in Redis.
In this tutorial, you’ll build an Express application that retrieves data from a RESTful API using the axios
module. Next, you will modify the app to store the data fetched from the API in Redis using the node-redis
module. After that, you will implement the cache validity period so that the cache can expire after a certain amount of time has passed. Finally, you will use the Express middleware to cache data.
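The cache-aside pattern you will build can be sketched in a few lines. This toy version uses a Map in place of Redis and a hypothetical fetchFromApi() stand-in for the axios call, but the control flow (check the cache, fall back to the API, store the result) is the same:

```javascript
// Cache-aside sketch: a Map stands in for Redis, and fetchFromApi()
// is a hypothetical stand-in for a real axios request.
const cache = new Map();

async function fetchFromApi(name) {
  // Pretend this is a slow network round trip to an external API.
  return { fish: name };
}

async function getFish(name) {
  if (cache.has(name)) {
    return { source: 'cache', data: cache.get(name) }; // cache hit
  }
  const data = await fetchFromApi(name); // cache miss: call the API
  cache.set(name, data); // store for subsequent requests
  return { source: 'api', data };
}

getFish('salmon')
  .then((first) => getFish('salmon').then((second) => {
    console.log(first.source, second.source); // api cache
  }));
```

The tutorial swaps the Map for node-redis so that the cache survives restarts and can be shared between processes, and later adds an expiry time so stale entries are evicted.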
To follow the tutorial, you will need:
Node.js environment setup on your server. If you are on Ubuntu 22.04, install the latest version of Node.js and npm by following option 3 in How To Install Node.js on Ubuntu 22.04. For other operating systems, see the How to Install Node.js and Create a Local Development Environment series.
Redis installed on your server. If you’re using Ubuntu 22.04, follow steps 1 and 2 of How To Install and Secure Redis on Ubuntu 22.04. If you’re working on another operating system, see How to Install and Secure Redis.
Knowledge of asynchronous programming. Follow Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript.
Basic knowledge using the Express web framework. See How To Get Started with Node.js and Express.
In this step, you’ll install the dependencies necessary for this project and start an Express server. In this tutorial, you’ll create a wiki containing information about different kinds of fish. We’ll call the project fish_wiki
.
First, create the directory for the project using the mkdir
command:
- mkdir fish_wiki
Move into the directory:
- cd fish_wiki
Initialize the package.json
file using the npm
command:
- npm init -y
The -y
option accepts all defaults automatically.
When you run the npm init
command, it will create the package.json
file in your directory with the following contents:
OutputWrote to /home/your_username/fish_wiki/package.json:
{
"name": "fish_wiki",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, you will install the following packages:
express
: a web server framework for Node.js.
axios
: a Node.js HTTP client, which is helpful for making API calls.
node-redis
: a Redis client that allows you to store and access data in Redis.
To install the three packages together, enter the following command:
- npm install express axios redis
After installing the packages, you’ll create a basic Express server.
Using nano
or the text editor of your choice, create and open the server.js
file:
- nano server.js
In your server.js
file, enter the following code to create an Express server:
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
First, you import express
into the file. In the second line, you set the app
variable as an instance of express
, which gives you access to methods such as get
, post
, listen
, and many more. This tutorial will focus on the get
and listen
methods.
In the following line, you define and assign the port
variable to the port number you want the server to listen on. If no port number is available in an environmental variables file, port 3000
will be used as the default.
Finally, using the app
variable, you invoke the express
module’s listen()
method to start the server on port 3000
.
Save and close the file.
Run the server.js
file using the node
command to start the server:
- node server.js
The console will log a message similar to the following:
Output
App listening on port 3000
The output confirms that the server is running and ready to serve any requests on port 3000
. Because Node.js does not automatically reload the server when files are changed, you will now stop the server using CTRL+C
so that you can update server.js
in the next step.
Once you have installed the dependencies and created an Express server, you’ll retrieve data from a RESTful API.
In this step, you’ll build upon the Express server from the previous step to retrieve data from a RESTful API without implementing caching, demonstrating what happens when data is not stored in a cache.
To begin, open the server.js
file in your text editor:
- nano server.js
Next, you will retrieve data from the FishWatch API. The FishWatch API returns information about fish species.
In your server.js
file, define a function that requests API data with the following highlighted code:
const express = require("express");
const axios = require("axios");
const app = express();
const port = process.env.PORT || 3000;
async function fetchApiData(species) {
const apiResponse = await axios.get(
`https://www.fishwatch.gov/api/species/${species}`
);
console.log("Request sent to the API");
return apiResponse.data;
}
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
In the second line, you import the axios
module. Next, you define an asynchronous function fetchApiData()
, which takes species
as a parameter. To make the function asynchronous, you prefix it with the async
keyword.
Within the function, you call the axios
module’s get()
method with the API endpoint you want the method to retrieve the data from, which is the FishWatch API in this example. Since the get()
method implements a promise, you prefix it with the await
keyword to resolve the promise. Once the promise is resolved and data is returned from the API, you call the console.log()
method. The console.log()
method will log a message saying that a request has been sent to the API. Finally, you return the data from the API.
Next, you will define an Express route that accepts GET
requests. In your server.js
file, define the route with the following code:
...
app.get("/fish/:species", getSpeciesData);
app.listen(port, () => {
...
});
In the preceding code block, you invoke the express
module’s get()
method, which only listens on GET
requests. The method takes two arguments:
/fish/:species
: the endpoint that Express will be listening on. The endpoint takes a route parameter :species
that captures anything entered in that position in the URL.
getSpeciesData()
(not yet defined): a callback function that will be called when the URL matches the endpoint specified in the first argument.
Now that the route is defined, specify the getSpeciesData
callback function:
...
async function getSpeciesData(req, res) {
}
app.get("/fish/:species", getSpeciesData);
...
The getSpeciesData
function is an asynchronous handler function passed to the express
module’s get()
method as a second argument. The getSpeciesData()
function takes two arguments: a request object and a response object. The request object contains information about the client, while the response object contains the information that will be sent to the client from Express.
Next, add the highlighted code to call fetchApiData()
to retrieve data from an API in the getSpeciesData()
callback function:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
results = await fetchApiData(species);
}
...
In the function, you extract the value captured from the endpoint stored in the req.params
object, then assign it to the species
variable. In the next line, you define the variable results
and set it to undefined
.
After that, you invoke the fetchApiData()
function with the species
variable as an argument. The fetchApiData()
function call is prefixed with the await
syntax because it returns a promise. When the promise resolves, it returns the data, which is then assigned to the results
variable.
Next, add the highlighted code to handle runtime errors:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
try {
results = await fetchApiData(species);
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
...
You define the try/catch
block to handle runtime errors. In the try
block, you call fetchApiData()
to retrieve data from an API.
If an error is encountered, the catch
block logs the error and returns a 404
status code with a “Data unavailable” response.
Most APIs return a 404 status code when they have no data for a specific query, which automatically triggers the catch
block to execute. However, the FishWatch API returns a 200 status code with an empty array when there is no data for that specific query. A 200 status code means the request was successful, so the catch
block is never triggered.
To trigger the catch
block, you need to check if the array is empty and throw an error when the if
condition evaluates to true. When the if
condition evaluates to false, you can send a response to the client containing the data.
To do that, add the highlighted code:
...
async function getSpeciesData(req, res) {
...
try {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
res.send({
fromCache: false,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
...
Once the data is returned from the API, the if
statement checks if the results
variable is empty. If the condition is met, you use the throw
statement to throw a custom error with the message API returned an empty array
. After it runs, execution switches to the catch
block, which logs the error message and returns a 404 response.
Conversely, if the results
variable has data, the if
statement condition will not be met. As a result, the program will skip the if
block and execute the response object’s send
method, which sends a response to the client.
The send
method takes an object that has the following properties:
fromCache
: the property accepts a value that helps you know whether the data is coming from the Redis cache or the API. Here, it is assigned a false
value because the data comes from an API.
data
: the property is assigned the results
variable that contains the data returned from the API.
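The empty-array check is subtle enough to be worth seeing in isolation: an empty array is truthy in JavaScript, so you must test its length explicitly to reach the catch block. The following sketch uses plain JavaScript; the validate name is illustrative and not part of the tutorial's code.

```javascript
// An empty array is truthy, so results.length must be checked explicitly
// to route a 200-with-empty-array response into the catch block.
function validate(results) {
  if (results.length === 0) {
    throw "API returned an empty array";
  }
  return results;
}

let message;
try {
  validate([]); // simulates the FishWatch API's 200-with-empty-array case
} catch (error) {
  message = error; // "API returned an empty array"
}
```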
At this point, your complete code will look like this:
const express = require("express");
const axios = require("axios");
const app = express();
const port = process.env.PORT || 3000;
async function fetchApiData(species) {
const apiResponse = await axios.get(
`https://www.fishwatch.gov/api/species/${species}`
);
console.log("Request sent to the API");
return apiResponse.data;
}
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
try {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
res.send({
fromCache: false,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
app.get("/fish/:species", getSpeciesData);
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Now that everything is in place, save and exit your file.
Start the express server:
- node server.js
The FishWatch API accepts many species, but we will use only the red-snapper
fish species as a route parameter on the endpoint you will be testing throughout this tutorial.
Now launch your favorite web browser on your local computer. Navigate to the http://localhost:3000/fish/red-snapper
URL.
Note: If you are following the tutorial on a remote server, you can view the app in your browser using port forwarding.
With the Node.js server still running, open another terminal on your local computer, then enter the following command:
- ssh -L 3000:localhost:3000 your-non-root-user@yourserver-ip
Upon connecting to the server, navigate to http://localhost:3000/fish/red-snapper
on your local machine web browser.
Once the page loads, you should see fromCache
set to false
.
Now, refresh the URL three more times and look at your terminal. The terminal will log “Request sent to the API” as many times as you have refreshed your browser.
If you refreshed the URL three times after the initial visit, your output will look like this:
Output
App listening on port 3000
Request sent to the API
Request sent to the API
Request sent to the API
Request sent to the API
This output shows that a network request is sent to the API server every time you refresh the browser. If you had an application with 1000 users hitting the same endpoint, that’s 1000 network requests sent to the API.
When you implement caching, the request to the API will only be done once. All subsequent requests will get data from the cache, boosting your application performance.
For now, stop your Express server with CTRL+C
.
Now that you can request data from an API and serve it to users, you’ll cache data returned from an API in Redis.
In this section, you’ll cache data from the API so that only the initial visit to your app endpoint will request data from an API server, and all the following requests will fetch data from the Redis cache.
Open the server.js
file:
- nano server.js
In your server.js
file, import the node-redis
module:
const express = require("express");
const axios = require("axios");
const redis = require("redis");
...
In the same file, connect to Redis using the node-redis
module by adding the highlighted code:
const express = require("express");
const axios = require("axios");
const redis = require("redis");
const app = express();
const port = process.env.PORT || 3000;
let redisClient;
(async () => {
redisClient = redis.createClient();
redisClient.on("error", (error) => console.error(`Error : ${error}`));
await redisClient.connect();
})();
async function fetchApiData(species) {
...
}
...
First, you define the redisClient
variable with the value set to undefined. After that, you define an anonymous self-invoked asynchronous function, which runs immediately after it is defined. You create one by enclosing a nameless function definition in parentheses (async () => {...})
. To make it self-invoked, you immediately follow it with another set of parentheses ()
, which ends up looking like (async () => {...})()
.
Within the function, you invoke the redis
module’s createClient()
method that creates a redis
object. Since you did not provide the port for Redis to use when you invoked the createClient()
method, Redis will use port 6379
, the default port.
You also call the Node.js on()
method that registers events on the Redis object. The on()
method takes two arguments: error
and a callback. The first argument error
is an event triggered when Redis encounters an error. The second argument is a callback that runs when the error
event is emitted. The callback logs the error in the console.
Finally, you call the connect()
method, which starts the connection with Redis on the default port 6379
. The connect()
method returns a promise, so you prefix it with the await
syntax to resolve it.
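As a standalone illustration, the self-invoked async function pattern can be sketched without Redis at all; here a resolved Promise stands in for the connection step (the client shape is made up for the example):

```javascript
// Minimal sketch of the (async () => {...})() pattern used above.
// A resolved Promise simulates the asynchronous connect step.
let client;

(async () => {
  // In the tutorial, this is where redisClient.connect() is awaited.
  client = await Promise.resolve({ connected: true });
})();
```

Because the function body is asynchronous, `client` is only assigned after the awaited Promise resolves, which is why the tutorial declares `redisClient` outside the function and assigns it inside.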
Now that your application is connected to Redis, you’ll modify the getSpeciesData()
callback to store data in Redis on the initial visit and retrieve the data from the cache for all the requests that follow.
In your server.js
file, add and update the highlighted code:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
let isCached = false;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
isCached = true;
results = JSON.parse(cacheResults);
} else {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
}
res.send({
fromCache: isCached,
data: results,
});
} catch (error) {
...
}
}
...
In the getSpeciesData
function, you define the isCached
variable with the value false
. Within the try
block, you call the node-redis
module’s get()
method with species
as the argument. When the method finds the key in Redis that matches the species
variable value, it returns the data, which is then assigned to the cacheResults
variable.
Next, an if
statement checks if the cacheResults
variable has data. If the condition is met, the isCached
variable is assigned true
. Following this, you invoke the parse()
method of the JSON
object with cacheResults
as the argument. The parse()
method converts JSON string data into a JavaScript object. After the JSON has been parsed, you invoke the send()
method, which takes an object that has the fromCache
property set to the isCached
variable. The method sends the response to the client.
If the get()
method of the node-redis
module finds no data in the cache, the cacheResults
variable is set to null
. As a result, the if
statement evaluates to false. When that happens, execution skips to the else
block where you call the fetchApiData()
function to fetch data from the API. However, once the data is returned from the API, it is not saved in Redis.
To store the data in the Redis cache, you need to use the node-redis
module’s set()
method to save it. To do that, add the highlighted line:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
let isCached = false;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
isCached = true;
results = JSON.parse(cacheResults);
} else {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results));
}
res.send({
fromCache: isCached,
data: results,
});
} catch (error) {
...
}
}
...
Within the else
block, once the data has been fetched, you call the node-redis
module’s set()
method to save the data in Redis under the key name of the value in the species
variable.
The set()
method takes two arguments, which are key-value pairs: species
and JSON.stringify(results)
.
The first argument, species
, is the key that the data will be saved under in Redis. Remember species
is set to the value passed to the endpoint you defined. For example, when you visit /fish/red-snapper
, species
is set to red-snapper
, which will be the key in Redis.
The second argument, JSON.stringify(results)
, is the value for the key. In the second argument, you invoke the JSON
’s stringify()
method with the results
variable as the argument, which contains data returned from the API. The method converts JSON into a string; this is why, when you retrieved data from the cache using the node-redis
module’s get()
method earlier, you invoked the JSON.parse
method with the cacheResults
variable as the argument.
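The stringify/parse round trip behind set() and get() can be demonstrated on its own. The sample object below is made up for illustration, not real FishWatch API output:

```javascript
// Redis stores strings, so objects are stringified on the way in
// and parsed back into JavaScript objects on the way out.
const results = [{ species: "red-snapper", region: "Gulf of Mexico" }];

const stored = JSON.stringify(results); // what you pass to redisClient.set()
const retrieved = JSON.parse(stored);   // what you do with redisClient.get()'s result

console.log(retrieved[0].species); // "red-snapper"
```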
Your complete file will now look like the following:
const express = require("express");
const axios = require("axios");
const redis = require("redis");
const app = express();
const port = process.env.PORT || 3000;
let redisClient;
(async () => {
redisClient = redis.createClient();
redisClient.on("error", (error) => console.error(`Error : ${error}`));
await redisClient.connect();
})();
async function fetchApiData(species) {
const apiResponse = await axios.get(
`https://www.fishwatch.gov/api/species/${species}`
);
console.log("Request sent to the API");
return apiResponse.data;
}
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
let isCached = false;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
isCached = true;
results = JSON.parse(cacheResults);
} else {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results));
}
res.send({
fromCache: isCached,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
app.get("/fish/:species", getSpeciesData);
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and exit your file, and run the server.js
using the node
command:
- node server.js
Once the server has started, refresh http://localhost:3000/fish/red-snapper
in your browser.
Notice that fromCache
is still set to false
:
Now refresh the page again to see that this time fromCache
is set to true
:
Refresh the page five times and go back to the terminal. Your output will look similar to the following:
Output
App listening on port 3000
Request sent to the API
Now, Request sent to the API
has only been logged once after multiple URL refreshes, contrasting with the last section where the message was logged for each refresh. This output confirms that only one request was sent to the server and that subsequently, data is fetched from Redis.
To further confirm that the data is stored in Redis, stop your server using CTRL+C
. Connect to the Redis server client with the following command:
- redis-cli
Retrieve the data under the key red-snapper
:
- get red-snapper
Your output will resemble the following (edited for brevity):
Output
"[{\"Fishery Management\":\"<ul>\\n<li><a...3\"}]"
The output shows the stringified version of JSON data that the API returns when you visit the /fish/red-snapper
endpoint, which confirms that the API data is stored in the Redis cache.
Exit the Redis Server client:
- exit
Now that you can cache data from an API, you can also set the cache validity.
When caching data, you need to know how often the data changes. Some API data changes in minutes; others in hours, weeks, months, or years. Setting a suitable cache duration ensures that your application serves up-to-date data to your users.
In this step, you’ll set the cache validity for the API data that needs to be stored in Redis. When the cache expires, your application will send a request to the API to retrieve recent data.
You need to consult your API documentation to set the correct expiry time for the cache. Most documentation will mention how frequently the data is updated. However, there are some cases where the documentation doesn’t provide the information, so you might have to guess. Checking the last_updated
property of various API endpoints can show how frequently the data is updated.
Once you choose the cache duration, you need to convert it into seconds. For demonstration in this tutorial, you will set the cache duration to 3 minutes or 180 seconds. This sample duration will make testing the cache duration functionality easier.
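A small helper (hypothetical, not part of the tutorial's code) can do the conversion into the seconds the expiry option expects:

```javascript
// Convert a human-friendly duration into seconds for a cache TTL.
// Helper name and shape are illustrative.
function toSeconds({ days = 0, hours = 0, minutes = 0, seconds = 0 }) {
  return ((days * 24 + hours) * 60 + minutes) * 60 + seconds;
}

console.log(toSeconds({ minutes: 3 })); // 180, the value used in this tutorial
```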
To implement the cache validity duration, open the server.js
file:
- nano server.js
Add the highlighted code:
const express = require("express");
const axios = require("axios");
const redis = require("redis");
const app = express();
const port = process.env.PORT || 3000;
let redisClient;
(async () => {
...
})();
async function fetchApiData(species) {
...
}
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
let isCached = false;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
isCached = true;
results = JSON.parse(cacheResults);
} else {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results), {
EX: 180,
NX: true,
});
}
res.send({
fromCache: isCached,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
app.get("/fish/:species", getSpeciesData);
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
In the node-redis
module’s set()
method, you pass a third argument of an object with the following properties:
EX
: accepts a value with the cache duration in seconds.
NX
: when set to true
, it ensures that the set()
method should only set a key that doesn't already exist in Redis.
Save and exit your file.
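To see what these two options mean without a running Redis server, here is a rough in-memory model of the EX and NX semantics. This is illustrative only and is not how node-redis works internally:

```javascript
// A tiny in-memory cache mimicking the EX (expiry in seconds) and
// NX (only set if absent) options of redisClient.set().
const store = new Map();

function set(key, value, { EX, NX } = {}) {
  if (NX && store.has(key) && store.get(key).expiresAt > Date.now()) {
    return false; // NX: refuse to overwrite an existing, unexpired key
  }
  const expiresAt = EX ? Date.now() + EX * 1000 : Infinity;
  store.set(key, { value, expiresAt });
  return true;
}

function get(key) {
  const entry = store.get(key);
  if (!entry || entry.expiresAt <= Date.now()) return null; // missing or expired
  return entry.value;
}

set("red-snapper", "stringified results", { EX: 180, NX: true });
console.log(get("red-snapper"));                       // the cached string
console.log(set("red-snapper", "new", { NX: true }));  // false: key exists
```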
Go back to the Redis server client to test the cache validity:
- redis-cli
Delete the red-snapper
key in Redis:
- del red-snapper
Exit the Redis client:
- exit
Now, start the development server with the node
command:
- node server.js
Switch back to your browser and refresh the http://localhost:3000/fish/red-snapper
URL. For the next three minutes, if you refresh the URL, the output in the terminal should be consistent with the following output:
Output
App listening on port 3000
Request sent to the API
After three minutes have passed, refresh the URL in your browser. In the terminal, you should see that “Request sent to the API” has been logged twice.
Output
App listening on port 3000
Request sent to the API
Request sent to the API
This output shows that the cache expired, and a request to the API was made again.
You can stop the Express server.
Now that you can set the cache validity, you’ll cache data using middleware next.
In this step, you’ll use the Express middleware to cache data. Middleware is a function that can access the request object, response object, and a callback that should run after it executes. The function that runs after the middleware also has access to the request and response object. When using middleware, you can modify request and response objects or return a response to the user earlier.
To use middleware in your application for caching, you will modify the getSpeciesData()
handler function to fetch data from an API and store it in Redis. You’ll move all the code that looks for data in Redis to the cacheData
middleware function.
When you visit the /fish/:species
endpoint, the middleware function will run first to search for data in the cache; if found, it will return a response, and the getSpeciesData
function won’t run. However, if the middleware does not find the data in the cache, it will call the getSpeciesData
function to fetch data from the API and store it in Redis.
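This control flow can be sketched with plain functions standing in for Express; all names below are illustrative, and the cache is a simple Map rather than Redis:

```javascript
// Model of the middleware flow: cacheData either responds from the cache
// or calls next(), which hands control to the getSpeciesData handler.
const cache = new Map([["red-snapper", "cached!"]]);

function cacheData(req, res, next) {
  const hit = cache.get(req.params.species);
  if (hit) {
    res.body = { fromCache: true, data: hit };
  } else {
    next(); // nothing cached: fall through to the handler
  }
}

function getSpeciesData(req, res) {
  res.body = { fromCache: false, data: "fetched from API" };
}

// Simulate Express dispatching a request through both functions.
function handle(species) {
  const req = { params: { species } };
  const res = {};
  cacheData(req, res, () => getSpeciesData(req, res));
  return res.body;
}

const hit = handle("red-snapper"); // fromCache: true, data: "cached!"
const miss = handle("tuna");       // fromCache: false
```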
First, open your server.js
:
- nano server.js
Next, remove the highlighted code:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
let isCached = false;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
isCached = true;
results = JSON.parse(cacheResults);
} else {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results), {
EX: 180,
NX: true,
});
}
res.send({
fromCache: isCached,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
...
In the getSpeciesData()
function, you remove all the code that looks for data stored in Redis. You also remove the isCached
variable since the getSpeciesData()
function will only fetch data from the API and store it in Redis.
Once the code has been removed, set fromCache
to false
as highlighted below, so the getSpeciesData()
function will look like the following:
...
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
try {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results), {
EX: 180,
NX: true,
});
res.send({
fromCache: false,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
...
The getSpeciesData()
function retrieves the data from API, stores it in the cache, and returns a response to the user.
Next, add the following code to define the middleware function for caching data in Redis:
...
async function cacheData(req, res, next) {
const species = req.params.species;
let results;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
results = JSON.parse(cacheResults);
res.send({
fromCache: true,
data: results,
});
} else {
next();
}
} catch (error) {
console.error(error);
res.status(404);
}
}
async function getSpeciesData(req, res) {
...
}
...
The cacheData()
middleware function takes three arguments: req
, res
, and next
. In the try
block, the function checks if the value in the species
variable has data stored in Redis under its key. If the data is in Redis, it is returned and set to the cacheResults
variable.
Next, the if
statement checks if cacheResults
has data. The data is saved in the results
variable if it evaluates to true. After that, the middleware uses the send()
method to return an object with the properties fromCache
set to true
and data
set to the results
variable.
However, if the if
statement evaluates to false, execution switches to the else
block. Within the else
block, you call next()
, which passes control to the next function that should execute after it.
To make the cacheData()
middleware pass control to the getSpeciesData()
function when next()
is invoked, update the express
module’s get()
method accordingly:
...
app.get("/fish/:species", cacheData, getSpeciesData);
...
The get()
method now takes cacheData
as its second argument, which is the middleware that looks for data cached in Redis and returns a response when found.
Now, when you visit the /fish/:species
endpoint, cacheData()
executes first. If data is cached, it will return the response, and the request-response cycle ends here. However, if no data is found in the cache, the getSpeciesData()
will be called to retrieve data from the API, store it in the cache, and return a response.
The complete file will now look like this:
const express = require("express");
const axios = require("axios");
const redis = require("redis");
const app = express();
const port = process.env.PORT || 3000;
let redisClient;
(async () => {
redisClient = redis.createClient();
redisClient.on("error", (error) => console.error(`Error : ${error}`));
await redisClient.connect();
})();
async function fetchApiData(species) {
const apiResponse = await axios.get(
`https://www.fishwatch.gov/api/species/${species}`
);
console.log("Request sent to the API");
return apiResponse.data;
}
async function cacheData(req, res, next) {
const species = req.params.species;
let results;
try {
const cacheResults = await redisClient.get(species);
if (cacheResults) {
results = JSON.parse(cacheResults);
res.send({
fromCache: true,
data: results,
});
} else {
next();
}
} catch (error) {
console.error(error);
res.status(404);
}
}
async function getSpeciesData(req, res) {
const species = req.params.species;
let results;
try {
results = await fetchApiData(species);
if (results.length === 0) {
throw "API returned an empty array";
}
await redisClient.set(species, JSON.stringify(results), {
EX: 180,
NX: true,
});
res.send({
fromCache: false,
data: results,
});
} catch (error) {
console.error(error);
res.status(404).send("Data unavailable");
}
}
app.get("/fish/:species", cacheData, getSpeciesData);
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
Save and exit your file.
To test the caching properly, you can delete the red-snapper
key in Redis. To do that, go into the Redis client:
- redis-cli
Remove the red-snapper
key:
- del red-snapper
Exit the Redis client:
- exit
Now, run the server.js
file:
- node server.js
Once the server starts, go back to the browser and visit the http://localhost:3000/fish/red-snapper
URL again. Refresh it multiple times.
The terminal will log the message that a request was sent to the API. The cacheData()
middleware will serve all requests for the next three minutes. Your output will look similar to this if you randomly refresh the URL in a four-minute timespan:
Output
App listening on port 3000
Request sent to the API
Request sent to the API
The behavior is consistent with how the application worked in the previous section.
You can now cache data in Redis using middleware.
In this article, you built an application that fetches data from an API and returns the data as a response to the client. You then modified the app to cache the API response in Redis on the initial visit and serve the data from the cache for all subsequent requests. You modified that cache duration to expire after a certain amount of time has passed, and then you used middleware to handle the cache data retrieval.
As a next step, you can explore the Node Redis documentation to learn more about the features available in the node-redis
module. You can also read the Axios and Express documentation for a deeper look into the topics covered in this tutorial.
To continue building your Node.js skill, see How To Code in Node.js series.
In An Introduction to GraphQL, you learned that GraphQL is an open-source query language and runtime for APIs created to solve issues that are often experienced with traditional REST API systems.
A good way to begin understanding how all the pieces of GraphQL fit together is to make a GraphQL API server. Although Apollo GraphQL is a popular commercial GraphQL implementation favored by many large companies, it is not a prerequisite for making your own GraphQL API server.
In this tutorial, you will make an Express API server in Node.js that serves up a GraphQL endpoint. You will also build a GraphQL schema based on the GraphQL type system, including operations, such as queries and mutations, and resolver functions to generate responses for any requests. You will also use the GraphiQL integrated development environment (IDE) to explore and debug your schema and query the GraphQL API from a client.
To follow this tutorial, you will need:
The first step is to set up an Express server, which you can do before writing any GraphQL code.
In a new project, you will install express
and cors
with the npm install
command:
npm install express cors
Express will be the framework for your server. It is a web application framework for Node.js designed for building APIs. The CORS package, which is Cross-Origin Resource Sharing middleware, will allow you to easily access this server from a browser.
You can also install Nodemon as a dev dependency:
npm install -D nodemon
Nodemon is a tool that helps develop Node-based applications by automatically restarting the application when file changes in the directory are detected.
Installing these packages will have created node_modules
and package.json
with two dependencies and one dev dependency listed.
Using nano
or your favorite text editor, open package.json
for editing, which will look something like this:
{
"dependencies": {
"cors": "^2.8.5",
"express": "^4.17.3"
},
"devDependencies": {
"nodemon": "^2.0.15"
}
}
There are a few more fields you will add at this point. To package.json
, make the following highlighted changes:
{
"main": "server.js",
"scripts": {
"dev": "nodemon server.js"
},
"dependencies": {
"cors": "^2.8.5",
"express": "^4.17.3"
},
"devDependencies": {
"nodemon": "^2.0.15"
},
"type": "module"
}
You will be creating a file for the server at server.js
, so you make main
point to server.js
. This will ensure that npm start
starts the server.
To make it easier to develop on the server, you also create a script called "dev"
that will run nodemon server.js
.
Finally, you add a type
of module
to ensure you can use import
statements throughout the code instead of using the default CommonJS require
.
Save and close the file when you’re done.
Next, create a file called server.js
. In it, you will create a simple Express server, listen on port 4000
, and send a request saying Hello, GraphQL!
. To set this up, add the following lines to your new file:
import express from 'express'
import cors from 'cors'
const app = express()
const port = 4000
app.use(cors())
app.use(express.json())
app.use(express.urlencoded({ extended: true }))
app.get('/', (request, response) => {
response.send('Hello, GraphQL!')
})
app.listen(port, () => {
console.log(`Running a server at http://localhost:${port}`)
})
This code block creates a basic HTTP server with Express. By invoking the express
function, you create an Express application. After setting up a few essential settings for CORS and JSON, you will define what should be sent with a GET
request to the root (/
) using app.get('/')
. Finally, use app.listen()
to define the port the API server should be listening on.
Save and close the file when you’re done.
Now you can run the command to start the Node server:
npm run dev
If you visit http://localhost:4000
in a browser or run a curl http://localhost:4000
command, you will see it return Hello, GraphQL!
, indicating that the Express server is running. At this point, you can begin adding code to serve up a GraphQL endpoint.
In this section, you will begin integrating the GraphQL schema into the basic Express server. You will do so by defining a schema, resolvers, and connecting to a data store.
To begin integrating GraphQL into the Express server, you will install three packages: graphql
, express-graphql
, and @graphql-tools/schema
. Run the following command:
npm install graphql@14 express-graphql @graphql-tools/schema
graphql
: the JavaScript reference implementation for GraphQL.
express-graphql
: HTTP server middleware for GraphQL.
@graphql-tools/schema
: a set of utilities for faster GraphQL development.
You can import these packages in the server.js
file by adding the highlighted lines:
import express from 'express'
import cors from 'cors'
import { graphqlHTTP } from 'express-graphql'
import { makeExecutableSchema } from '@graphql-tools/schema'
...
The next step is to create an executable GraphQL schema.
To avoid the overhead of setting up a database, you can use an in-memory store for the data the GraphQL server will query. You can create a data
object with the values your database would have. Add the highlighted lines to your file:
import express from 'express'
import cors from 'cors'
import { graphqlHTTP } from 'express-graphql'
import { makeExecutableSchema } from '@graphql-tools/schema'
const data = {
warriors: [
{ id: '001', name: 'Jaime' },
{ id: '002', name: 'Jorah' },
],
}
...
The data structure here represents a database table called warriors
that has two rows, represented by the Jaime
and Jorah
entries.
Note: Using a real data store is outside of the scope of this tutorial. Accessing and manipulating data in a GraphQL server is performed through the resolvers, which can connect to the database manually or through an ORM like Prisma, typically via the context
object passed to each resolver. For the rest of this tutorial, we will use the data
variable to represent datastore values.
With your packages installed and some data in place, you will now create a schema, which defines the API by describing the data available to be queried.
Now that you have some basic data, you can begin making a rudimentary schema for an API to get the minimum amount of code necessary to begin using a GraphQL endpoint. This schema is intended to replicate something that might be used for a fantasy RPG game, in which there are characters who have roles such as warriors, wizards, and healers. This example is meant to be open-ended so you can add as much or as little as you want, such as spells and weapons.
A GraphQL schema relies on a type system. There are some built-in types, and you can also create your own type. For this example, you will create a new type
called Warrior
, and give it two fields: id
and name
.
type Warrior {
id: ID!
name: String!
}
The id
has an ID
type, and the name
has a String
type. These are both built-in scalars, or primitive types. The exclamation point (!
) means the field is non-nullable, and a value will be required for any instance of this type.
The only additional piece of information you need to get started is a base Query
type, which is the entry point to the GraphQL query. We will define warriors
as an array of Warrior
types.
type Query {
warriors: [Warrior]
}
With these two types, you have a valid schema that can be used in the GraphQL HTTP middleware. Ultimately, the schema you define here will be passed into the makeExecutableSchema
function provided by @graphql-tools/schema
as typeDefs
. The two properties passed into an object on the makeExecutableSchema
function will be as follows:
typeDefs
: a GraphQL schema language string.
resolvers
: functions that are called to execute a field and produce a value.
In server.js
, after importing the dependencies, create a typeDefs
variable and assign the GraphQL schema as a string, as shown here:
...
const data = {
warriors: [
{ id: '001', name: 'Jaime' },
{ id: '002', name: 'Jorah' },
],
}
const typeDefs = `
type Warrior {
id: ID!
name: String!
}
type Query {
warriors: [Warrior]
}
`
...
Now you have your data set as well as your schema defined, as data
and typeDefs
, respectively. Next, you’ll create resolvers so the API knows what to do with incoming requests.
Resolvers are a collection of functions that generate a response for the GraphQL server. Each resolver function has four parameters:
obj
: The parent object, which is not necessary to use here since it is already the root, or top-level object.
args
: Any GraphQL arguments provided to the field.
context
: State shared between all resolvers, often a database connection.
info
: Additional information.
In this case, you will make a resolver for the root Query
type and return a value for warriors
.
To get started with this example server, pass the in-memory data store from earlier in this section by adding the highlighted lines to server.js
:
...
const typeDefs = `
type Warrior {
id: ID!
name: String!
}
type Query {
warriors: [Warrior]
}
`
const resolvers = {
Query: {
warriors: (obj, args, context, info) => context.warriors,
},
}
...
The entry point into the GraphQL server will be through the root Query
type on the resolvers. You have now added one resolver function, called warriors
, which will return warriors
from context
. context
is where your database entry point will be contained, and for this specific implementation, it will be the data
variable that contains your in-memory data store.
Each individual resolver function has four parameters: obj
, args
, context
, and info
. The most useful and relevant parameter to our schema right now is context
, which is an object shared by the resolvers. It is often used as the connection between the GraphQL server and a database.
Finally, with the typeDefs
and resolvers
all set, you have enough information to create an executable schema. Add the highlighted lines to your file:
...
const resolvers = {
Query: {
warriors: (obj, args, context, info) => context.warriors,
},
}
const executableSchema = makeExecutableSchema({
typeDefs,
resolvers,
})
...
The makeExecutableSchema function creates a complete schema that you can pass into the GraphQL endpoint.
Now replace the default root endpoint that is currently returning Hello, GraphQL!
with the following /graphql
endpoint by adding the highlighted lines:
...
const executableSchema = makeExecutableSchema({
typeDefs,
resolvers,
})
app.use(
'/graphql',
graphqlHTTP({
schema: executableSchema,
context: data,
graphiql: true,
})
)
...
The convention is that a GraphQL server will use the /graphql
endpoint. Using the graphqlHTTP
middleware requires passing in the schema and a context, which in this case, is your mock data store.
You now have everything necessary to begin serving the endpoint. Your server.js
code should look like this:
import express from 'express'
import cors from 'cors'
import { graphqlHTTP } from 'express-graphql'
import { makeExecutableSchema } from '@graphql-tools/schema'
const app = express()
const port = 4000
// In-memory data store
const data = {
warriors: [
{ id: '001', name: 'Jaime' },
{ id: '002', name: 'Jorah' },
],
}
// Schema
const typeDefs = `
type Warrior {
id: ID!
name: String!
}
type Query {
warriors: [Warrior]
}
`
// Resolver for warriors
const resolvers = {
Query: {
warriors: (obj, args, context) => context.warriors,
},
}
const executableSchema = makeExecutableSchema({
typeDefs,
resolvers,
})
app.use(cors())
app.use(express.json())
app.use(express.urlencoded({ extended: true }))
// Entrypoint
app.use(
'/graphql',
graphqlHTTP({
schema: executableSchema,
context: data,
graphiql: true,
})
)
app.listen(port, () => {
console.log(`Running a server at http://localhost:${port}`)
})
Save and close the file when you’re done.
Now you should be able to go to http://localhost:4000/graphql
and explore your schema using the GraphiQL IDE.
Your GraphQL API is now complete based on the schema and resolvers you created in this section. In the next section, you’ll use the GraphiQL IDE to help you debug and understand your schema.
Since you applied the graphiql
option as true
to the GraphQL middleware, you have access to the GraphiQL integrated development environment (IDE). If you visit the GraphQL endpoint in a browser window, you’ll find yourself in GraphiQL.
GraphiQL is an in-browser tool for writing, validating, and testing GraphQL queries. Now you can test out your GraphQL server to ensure it’s returning the correct data.
Make a query for warriors
, requesting the id
and name
properties. In your browser, add the following lines to the left pane of GraphiQL:
{
warriors {
id
name
}
}
Submit the query by pressing the Play arrow on the top left, and you should see the return value in JSON on the right-hand side:
Output
{
"data": {
"warriors": [
{ "id": "001", "name": "Jaime" },
{ "id": "002", "name": "Jorah" }
]
}
}
If you remove one of the fields in the query, you will see the return value change accordingly. For example, if you only want to retrieve the name
field, you can write the query like this:
{
warriors {
name
}
}
And now your response will look like this:
Output
{
"data": {
"warriors": [{ "name": "Jaime" }, { "name": "Jorah" }]
}
}
The ability to query only the fields you need is one of the powerful aspects of GraphQL and is what makes it a client-driven language.
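To make the idea of client-selected fields concrete outside of GraphQL itself, here is a small sketch of the behavior: returning only the keys a client asked for. The pick helper is hypothetical, written for illustration; it is not part of any GraphQL library.

```javascript
// Hypothetical helper: keep only the requested fields of each row,
// mimicking how a GraphQL response is shaped by the query's selection set.
function pick(rows, fields) {
  return rows.map((row) =>
    Object.fromEntries(fields.map((field) => [field, row[field]]))
  )
}

const warriors = [
  { id: '001', name: 'Jaime' },
  { id: '002', name: 'Jorah' },
]

console.log(pick(warriors, ['name']))
// [ { name: 'Jaime' }, { name: 'Jorah' } ]
```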
Back in GraphiQL, if you click on Docs all the way to the right, it will expand a sidebar labeled Documentation Explorer. From that sidebar, you can click through the documentation to view your schema in more detail.
Now your API is complete and you’ve explored how to use it from GraphiQL. The next step will be to make actual requests from a client to your GraphQL API.
Just like with REST APIs, a client can communicate with a GraphQL API by making HTTP requests over the network. Since you can use built-in browser APIs like fetch
to make network requests, you can also use fetch
to query GraphQL.
For a very basic example, create an HTML skeleton in an index.html
file with a <pre>
tag:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>GraphQL Client</title>
</head>
<body>
<pre><!-- data will be displayed here --></pre>
<script>
// Add query here
</script>
</body>
</html>
In the script
tag, make an asynchronous function that sends a POST
request to the GraphQL API:
...
<body>
<script>
async function queryGraphQLServer() {
const response = await fetch('http://localhost:4000/graphql', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
query: '{ warriors { name } }',
}),
})
const data = await response.json()
// Append data to the pre tag
const pre = document.querySelector('pre')
pre.textContent = JSON.stringify(data, null, 2) // Pretty-print the JSON
}
queryGraphQLServer()
</script>
</body>
...
The Content-Type
header must be set to application/json
, and the query must be passed in the body as a string. The script will call the function to make the request, and set the response in the pre
tag.
Here is the full index.html
code:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>GraphQL</title>
</head>
<body>
<pre></pre>
<script>
async function queryGraphQLServer() {
const response = await fetch('http://localhost:4000/graphql', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
query: '{ warriors { name } }',
}),
})
const data = await response.json()
const pre = document.querySelector('pre')
pre.textContent = JSON.stringify(data, null, 2) // Pretty-print the JSON
}
queryGraphQLServer()
</script>
</body>
</html>
Save and close the file when you’re done.
Now when you view the index.html
file in a browser, you will see an outgoing network request to the http://localhost:4000/graphql
endpoint, which will return a 200
with the data. You can view this network request by opening Developer Tools and navigating to the Network tab.
If your request went through and you got a 200
response with the data from the GraphQL API, congratulations! You made your first GraphQL API server.
In this tutorial, you made a GraphQL API server using the Express framework in Node.js. The GraphQL server consists of a single /graphql
endpoint that can handle incoming requests to query the data store. Your API had a schema with the base Query
type, a custom Warrior
type, and a resolver to fetch the proper data for those types.
Hopefully, this article helped demystify GraphQL and opens up new ideas and possibilities of what can be accomplished with GraphQL. Many tools exist that can help with the more complex aspects of working with GraphQL, such as authentication, security, and caching, but learning how to set up an API server in the simplest way possible should help you understand the essentials of GraphQL.
This tutorial is part of our How To Manage Data with GraphQL series, which covers the basics of using GraphQL.
Because of such features as its speedy Input/Output (I/O) performance and its basis in the well-known JavaScript language, Node.js has quickly become a popular runtime environment for back-end web development. But as interest grows, larger applications are built, and managing the complexity of the codebase and its dependencies becomes more difficult. Node.js organizes this complexity using modules, which are any single JavaScript files containing functions or objects that can be used by other programs or modules. A collection of one or more modules is commonly referred to as a package, and these packages are themselves organized by package managers.
The Node.js Package Manager (npm) is the default and most popular package manager in the Node.js ecosystem, and is primarily used to install and manage external modules in a Node.js project. It is also commonly used to install a wide range of CLI tools and run project scripts. npm tracks the modules installed in a project with the package.json
file, which resides in a project’s directory and contains:
As you create more complex Node.js projects, managing your metadata and dependencies with the package.json
file will provide you with more predictable builds, since all external dependencies are kept the same. The file will keep track of this information automatically; while you may change the file directly to update your project’s metadata, you will seldom need to interact with it directly to manage modules.
In this tutorial, you will manage packages with npm. The first step will be to create and understand the package.json
file. You will then use it to keep track of all the modules you install in your project. Finally, you will list your package dependencies, update your packages, uninstall your packages, and perform an audit to find security flaws in your packages.
To complete this tutorial, you will need:
package.json
File

We begin this tutorial by setting up the example project: a fictional Node.js locator
module that gets the user’s IP address and returns the country of origin. You will not be coding the module in this tutorial. However, the packages you manage would be relevant if you were developing it.
First, you will create a package.json
file to store useful metadata about the project and help you manage the project’s dependent Node.js modules. As the suffix suggests, this is a JSON (JavaScript Object Notation) file. JSON is a standard format used for sharing, based on JavaScript objects and consisting of data stored as key-value pairs. If you would like to learn more about JSON, read our Introduction to JSON article.
Since a package.json
file contains numerous properties, it can be cumbersome to create manually without copying and pasting a template from somewhere else. To make things easier, npm provides the init
command. This is an interactive command that asks you a series of questions and creates a package.json
file based on your answers.
init
Command

First, set up a project so you can practice managing modules. In your shell, create a new folder called locator
:
- mkdir locator
Then move into the new folder:
- cd locator
Now, initialize the interactive prompt by entering:
- npm init
Note: If your code will use Git for version control, create the Git repository first and then run npm init
. The command automatically understands that it is in a Git-enabled folder. If a Git remote is set, it automatically fills out the repository
, bugs
, and homepage
fields for your package.json
file. If you initialized the repo after creating the package.json
file, you will have to add this information in yourself. For more on Git version control, see our Introduction to Git: Installation, Usage, and Branches series.
You will receive the following output:
Output
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.
See `npm help init` for definitive documentation on these fields
and exactly what they do.
Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.
Press ^C at any time to quit.
package name: (locator)
You will first be prompted for the name
of your new project. By default, the command assumes it’s the name of the folder you’re in. Default values for each property are shown in parentheses ()
. Since the default value for name
will work for this tutorial, press ENTER
to accept it.
The next value to enter is version
. Along with the name
, this field is required if your project will be shared with others in the npm package repository.
Note: Node.js packages are expected to follow the Semantic Versioning (semver) guide. Therefore, the first number will be the MAJOR
version number that only changes when the API changes. The second number will be the MINOR
version that changes when features are added. The last number will be the PATCH
version that changes when bugs are fixed.
Press ENTER
so the default version of 1.0.0
is accepted.
The next field is description
—a useful string to explain what your Node.js module does. Our fictional locator
project would get the user’s IP address and return the country of origin. A fitting description
would be Finds the country of origin of the incoming request
, so type in something like this and press ENTER
. The description
is very useful when people are searching for your module.
The following prompt will ask you for the entry point
. If someone installs and requires
your module, what you set in the entry point
will be the first part of your program that is loaded. The value needs to be the relative location of a JavaScript file, and will be added to the main
property of the package.json
. Press ENTER
to keep the default value of index.js
.
Note: Most modules have an index.js
file as the main point of entry. This is the default value for a package.json
’s main
property, which is the point of entry for npm modules. If there is no package.json
, Node.js will try to load index.js
by default.
Next, you’ll be asked for a test command
, an executable script or command to run your project tests. In many popular Node.js modules, tests are written and executed with Mocha, Jest, Jasmine, or other test frameworks. Since testing is beyond the scope of this article, leave this option empty for now, and press ENTER
to move on.
The init
command will then ask for the project’s git repository, which may live on a service such as GitHub (for more information, see GitHub’s Repository documentation). You won’t use this in this example, so leave it empty as well.
After the repository prompt, the command asks for keywords
. This property is an array of strings with useful terms that people can use to find your repository. It’s best to have a small set of words that are really relevant to your project, so that searching can be more targeted. List these keywords as a string with commas separating each value. For this sample project, type ip,geo,country
at the prompt. The finished package.json
will have three items in the array for keywords
.
The next field in the prompt is author
. This is useful for users of your module who want to get in contact with you. For example, if someone discovers an exploit in your module, they can use this to report the problem so that you can fix it. The author
field is a string in the following format: "Name <Email> (Website)"
. For example, "Sammy <sammy@your_domain> (https://your_domain)"
is a valid author. The email and website data are optional; a valid author could just be a name. Add your contact details as an author and confirm with ENTER
.
Finally, you’ll be prompted for the license
. This determines the legal permissions and limitations users will have while using your module. Many Node.js modules are open source, so npm sets the default to ISC.
At this point, you would review your licensing options and decide what’s best for your project. For more information on different types of open source licenses, see this license list from the Open Source Initiative. If you do not want to provide a license for a private repository, you can type UNLICENSED
at the prompt. For this sample, use the default ISC license, and press ENTER
to finish this process.
The init
command will now display the package.json
file it’s going to create. It will look similar to this:
Output
About to write to /home/sammy/locator/package.json:
{
"name": "locator",
"version": "1.0.0",
"description": "Finds the country of origin of the incoming request",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [
"ip",
"geo",
"country"
],
"author": "Sammy <sammy@your_domain> (https://your_domain)",
"license": "ISC"
}
Is this OK? (yes)
Once the information matches what you see here, press ENTER
to complete this process and create the package.json
file. With this file, you can keep a record of modules you install for your project.
Now that you have your package.json
file, you can test out installing modules in the next step.
It is common in software development to use external libraries to perform ancillary tasks in projects. This allows the developer to focus on the business logic and build the application more quickly and efficiently by reusing tools and code that others have already written.
For example, if our sample locator
module has to make an external API request to get geographical data, we could use an HTTP library to make that task easier. Since our main goal is to return pertinent geographical data to the user, we could install a package that makes HTTP requests easier for us instead of rewriting this code for ourselves, a task that is beyond the scope of our project.
Let’s run through this example. In your locator
application, you will use the axios library, which will help you make HTTP requests. Install it by entering the following in your shell:
- npm install axios --save
You begin this command with npm install
, which will install the package (for brevity you can also use npm i
). You then list the packages that you want installed, separated by a space. In this case, this is axios
. Finally, you end the command with the optional --save
parameter, which specifies that axios
will be saved as a project dependency.
When the library is installed, you will see output similar to the following:
Output
...
+ axios@0.27.2
added 5 packages from 8 contributors and audited 5 packages in 0.764s
found 0 vulnerabilities
Now, open the package.json
file, using a text editor of your choice. This tutorial will use nano
:
- nano package.json
You’ll see a new property, as highlighted in the following:
{
"name": "locator",
"version": "1.0.0",
"description": "Finds the country of origin of the incoming request",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [
"ip",
"geo",
"country"
],
"author": "Sammy sammy@your_domain (https://your_domain)",
"license": "ISC",
"dependencies": {
"axios": "^0.27.2"
}
}
The --save
option told npm
to update the package.json
with the module and version that was just installed. This is great, as other developers working on your projects can easily see what external dependencies are needed.
Note: You may have noticed the ^
before the version number for the axios
dependency. Recall that semantic versioning consists of three digits: MAJOR, MINOR, and PATCH. The ^
symbol signifies that any higher MINOR or PATCH version would satisfy this version constraint. If you see ~
at the beginning of a version number, then only higher PATCH versions satisfy the constraint.
When you are finished reviewing package.json
, close the file. If you used nano to edit the file, you can do so by pressing CTRL + X
and then ENTER
.
Packages that are used for the development of a project but not for building or running it in production are called development dependencies. They are not necessary for your module or application to work in production, but may be helpful while writing the code.
For example, it’s common for developers to use code linters to ensure their code follows best practices and to keep the style consistent. While this is useful for development, this only adds to the size of the distributable without providing a tangible benefit when deployed in production.
Install a linter as a development dependency for your project. Try this out in your shell:
- npm i eslint@8.0.0 --save-dev
In this command, you used the --save-dev
flag. This will save eslint
as a dependency that is only needed for development. Notice also that you added @8.0.0
to your dependency name. When modules are updated, they are tagged with a version. The @
tells npm to look for a specific tag of the module you are installing. Without a specified tag, npm installs the latest tagged version. Open package.json
again:
- nano package.json
This will show the following:
{
"name": "locator",
"version": "1.0.0",
"description": "Finds the country of origin of the incoming request",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [
"ip",
"geo",
"country"
],
"author": "Sammy sammy@your_domain (https://your_domain)",
"license": "ISC",
"dependencies": {
"axios": "^0.27.2"
},
"devDependencies": {
"eslint": "^8.0.0"
}
}
eslint
has been saved as a devDependencies
, along with the version number you specified earlier. Exit package.json
.
node_modules
and package-lock.json
When you first install a package to a Node.js project, npm
automatically creates the node_modules
folder to store the modules needed for your project and the package-lock.json
file that you examined earlier.
Confirm these are in your working directory. In your shell, type ls
and press ENTER
. You will observe the following output:
Output
node_modules package.json package-lock.json
The node_modules
folder contains every installed dependency for your project. In most cases, you should not commit this folder into your version controlled repository. As you install more dependencies, the size of this folder will quickly grow. Furthermore, the package-lock.json
file keeps a record of the exact versions installed in a more succinct way, so including node_modules
is not necessary.
While the package.json
file lists dependencies that tell us the suitable versions that should be installed for the project, the package-lock.json
file keeps track of all changes in package.json
or node_modules
and tells us the exact version of the package installed. You usually commit this to your version controlled repository instead of node_modules
, as it’s a cleaner representation of all your dependencies.
With your package.json
and package-lock.json
files, you can quickly set up the same project dependencies before you start development on a new project. To demonstrate this, move up a level in your directory tree and create a new folder named cloned_locator
in the same directory level as locator
:
- cd ..
- mkdir cloned_locator
Move into your new directory:
- cd cloned_locator
Now copy the package.json
and package-lock.json
files from locator
to cloned_locator
:
- cp ../locator/package.json ../locator/package-lock.json .
To install the required modules for this project, type:
- npm i
npm will check for a package-lock.json
file to install the modules. If no lock file is available, it would read from the package.json
file to determine the installations. It is usually quicker to install from package-lock.json
, since the lock file contains the exact version of modules and their dependencies, meaning npm does not have to spend time figuring out a suitable version to install.
When deploying to production, you may want to skip the development dependencies. Recall that development dependencies are stored in the devDependencies
section of package.json
, and have no impact on the running of your app. When installing modules as part of the deployment process to deploy your application, omit the dev dependencies by running:
- npm i --production
The --production
flag ignores the devDependencies
section during installation. For now, stick with your development build.
Before moving to the next section, return to the locator
folder:
- cd ../locator
So far, you have been installing npm modules for the locator
project. npm also allows you to install packages globally. This means that the package is available to your user in the wider system, like any other shell command. This ability is useful for the many Node.js modules that are CLI tools.
For example, you may want to blog about the locator
project that you’re currently working on. To do so, you can use a library like Hexo to create and manage your static website blog. Install the Hexo CLI globally like this:
- npm i hexo-cli -g
To install a package globally, you append the -g
flag to the command.
Note: If you get a permission error trying to install this package globally, your system may require super user privileges to run the command. Try again with sudo npm i hexo-cli -g
.
Test that the package was successfully installed by typing:
- hexo --version
You will see output similar to:
Outputhexo-cli: 4.3.0
os: linux 5.15.0-35-generic Ubuntu 22.04 LTS 22.04 LTS (Jammy Jellyfish)
node: 18.3.0
v8: 10.2.154.4-node.8
uv: 1.43.0
zlib: 1.2.11
brotli: 1.0.9
ares: 1.18.1
modules: 108
nghttp2: 1.47.0
napi: 8
llhttp: 6.0.6
openssl: 3.0.3+quic
cldr: 41.0
icu: 71.1
tz: 2022a
unicode: 14.0
ngtcp2: 0.1.0-DEV
nghttp3: 0.1.0-DEV
So far, you have learned how to install modules with npm. You can install packages to a project locally, either as a production or development dependency. You can also install packages based on pre-existing package.json
or package-lock.json
files, allowing you to develop with the same dependencies as your peers. Finally, you can use the -g
flag to install packages globally, so you can access them regardless of whether you’re in a Node.js project or not.
Now that you can install modules, in the next section you will practice techniques to administer your dependencies.
A complete package manager can do a lot more than install modules. npm has over 20 commands relating to dependency management available. In this step, you will list your installed modules, update and uninstall packages, and run a security audit on your dependencies.
While these examples will be done in your locator
folder, all of these commands can be run globally by appending the -g
flag at the end of them, exactly like you did when installing globally.
If you would like to know which modules are installed in a project, it would be easier to use the list
or ls
command instead of reading the package.json
directly. To do this, enter:
- npm ls
You will see output like this:
Output├── axios@0.27.2
└── eslint@8.0.0
The --depth
option allows you to specify what level of the dependency tree you want to see. When it’s 0
, you only see your top level dependencies. If you want to see the entire dependency tree, use the --all
argument:
- npm ls --all
You will see output like the following:
Output├─┬ axios@0.27.2
│ ├── follow-redirects@1.15.1
│ └─┬ form-data@4.0.0
│ ├── asynckit@0.4.0
│ ├─┬ combined-stream@1.0.8
│ │ └── delayed-stream@1.0.0
│ └─┬ mime-types@2.1.35
│ └── mime-db@1.52.0
└─┬ eslint@8.0.0
├─┬ @eslint/eslintrc@1.3.0
│ ├── ajv@6.12.6 deduped
│ ├── debug@4.3.4 deduped
│ ├── espree@9.3.2 deduped
│ ├── globals@13.15.0 deduped
│ ├── ignore@5.2.0
│ ├── import-fresh@3.3.0 deduped
│ ├── js-yaml@4.1.0 deduped
│ ├── minimatch@3.1.2 deduped
│ └── strip-json-comments@3.1.1 deduped
. . .
It is a good practice to keep your npm modules up to date. This improves your likelihood of getting the latest security fixes for a module. Use the outdated
command to check if any modules can be updated:
- npm outdated
You will get output like the following:
OutputPackage Current Wanted Latest Location Depended by
eslint 8.0.0 8.17.0 8.17.0 node_modules/eslint locator
This command first lists the Package
that’s installed and the Current
version. The Wanted
column shows which version satisfies your version requirement in package.json
. The Latest
column shows the most recent version of the module that was published.
The Location
column states where in the dependency tree the package is located. The outdated
command has the --depth
flag like ls
. By default, the depth is 0.
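The Wanted
column is driven by the semver range declared in package.json
(for example, ^8.0.0
). As a rough sketch of how a caret range constrains updates, here is a simplified matcher; it omits semver's special pre-1.0.0 rules, and the list of published versions is invented and assumed to be sorted:

```javascript
// Sketch of why `Wanted` stays within your declared range.
// A caret range (^8.0.0) accepts versions with the same major number,
// at or above the declared version. Simplified: pre-1.0.0 rules omitted.
function satisfiesCaret(range, version) {
  const base = range.replace(/^\^/, "").split(".").map(Number);
  const v = version.split(".").map(Number);
  if (v[0] !== base[0]) return false; // major version must match
  for (let i = 0; i < 3; i++) {
    if (v[i] > base[i]) return true;
    if (v[i] < base[i]) return false;
  }
  return true; // exactly equal
}

// Hypothetical published versions, sorted ascending:
const published = ["8.0.0", "8.17.0", "9.1.0"];
const wanted = published.filter((v) => satisfiesCaret("^8.0.0", v)).pop();

console.log(wanted); // "8.17.0" — 9.1.0 is Latest, but outside the range
```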
It seems that you can update eslint
to a more recent version. Use the update
or up
command like this:
- npm up eslint
The output of the command will contain the version installed:
Output
removed 7 packages, changed 4 packages, and audited 91 packages in 1s
14 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
To see which version of eslint
that you are using now, you can use npm ls
using the package name as an argument:
- npm ls eslint
The output will resemble the npm ls
command you used before, but include only the eslint
package’s versions:
Output└─┬ eslint@8.17.0
└─┬ eslint-utils@3.0.0
└── eslint@8.17.0 deduped
If you wanted to update all modules at once, then you would enter:
- npm up
The npm uninstall
command can remove modules from your projects. This means the module will no longer be installed in the node_modules
folder, nor will it be seen in your package.json
and package-lock.json
files.
Removing dependencies from a project is a normal activity in the software development lifecycle. A dependency may not solve the problem as advertised, or may not provide a satisfactory development experience. In these cases, it may be better to uninstall the dependency and build your own module.
Imagine that axios
does not provide the development experience you would have liked for making HTTP requests. Uninstall axios
with the uninstall
or un
command by entering:
- npm un axios
Your output will be similar to:
Outputremoved 8 packages, and audited 83 packages in 542ms
13 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
It doesn’t explicitly say that axios
was removed. To verify that it was uninstalled, list the dependencies once again:
- npm ls
Now, you can see that only eslint
is installed:
Outputlocator@1.0.0 /home/ubuntu/locator
└── eslint@8.17.0
This shows that you have successfully uninstalled the axios
package.
npm provides an audit
command to highlight potential security risks in your dependencies. To see the audit in action, install an outdated version of the request module by running the following:
- npm i request@2.60.0
When you install this outdated version of request
, you’ll notice output similar to the following:
Outputnpm WARN deprecated cryptiles@2.0.5: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and security patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
npm WARN deprecated sntp@1.0.9: This module moved to @hapi/sntp. Please make sure to switch over as this distribution is no longer supported and may contain bugs and critical security issues.
npm WARN deprecated boom@2.10.1: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and security patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
npm WARN deprecated har-validator@1.8.0: this library is no longer supported
npm WARN deprecated hoek@2.16.3: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and security patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
npm WARN deprecated request@2.60.0: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated hawk@3.1.3: This module moved to @hapi/hawk. Please make sure to switch over as this distribution is no longer supported and may contain bugs and critical security issues.
added 56 packages, and audited 139 packages in 4s
13 packages are looking for funding
run `npm fund` for details
9 vulnerabilities (5 moderate, 2 high, 2 critical)
To address all issues, run:
npm audit fix --force
Run `npm audit` for details.
npm is telling you that you have deprecated packages and vulnerabilities in your dependencies. To get more details, audit your entire project with:
- npm audit
The audit
command shows tables of output highlighting security flaws:
Output# npm audit report
bl <1.2.3
Severity: moderate
Remote Memory Exposure in bl - https://github.com/advisories/GHSA-pp7h-53gx-mx7r
fix available via `npm audit fix`
node_modules/bl
request 2.16.0 - 2.86.0
Depends on vulnerable versions of bl
Depends on vulnerable versions of hawk
Depends on vulnerable versions of qs
Depends on vulnerable versions of tunnel-agent
node_modules/request
cryptiles <=4.1.1
Severity: critical
Insufficient Entropy in cryptiles - https://github.com/advisories/GHSA-rq8g-5pc5-wrhr
Depends on vulnerable versions of boom
fix available via `npm audit fix`
node_modules/cryptiles
hawk <=9.0.0
Depends on vulnerable versions of boom
Depends on vulnerable versions of cryptiles
Depends on vulnerable versions of hoek
Depends on vulnerable versions of sntp
node_modules/hawk
. . .
9 vulnerabilities (5 moderate, 2 high, 2 critical)
To address all issues, run:
npm audit fix
You can see the path of the vulnerability, and sometimes npm offers ways for you to fix it. You can run the update command as suggested, or you can run the fix
subcommand of audit
. In your shell, enter:
- npm audit fix
You will see similar output to:
Outputnpm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
added 19 packages, removed 34 packages, changed 13 packages, and audited 124 packages in 3s
14 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
npm was able to update the vulnerable packages safely, and the audit now reports 0 vulnerabilities. However, you still have three deprecated packages in your dependencies. The audit fix
command does not always fix every problem: although a version of a module may have a security vulnerability, updating it to a version with a different API could break code higher up in the dependency tree.
You can use the --force
parameter to ensure the vulnerabilities are gone, like this:
- npm audit fix --force
As mentioned before, this is not recommended unless you are sure that it won’t break functionality.
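If you want to act on audit results in a script, npm can emit a machine-readable report with npm audit --json
. The sketch below tallies a report's severity counts; the object imitates the report's vulnerabilities summary, with its shape simplified, so treat the exact field names as an assumption:

```javascript
// Sketch: consuming `npm audit --json` output in a script.
// The object below imitates the report's severity summary
// (shape simplified; field names are an assumption).
const report = {
  metadata: {
    vulnerabilities: { info: 0, low: 0, moderate: 5, high: 2, critical: 2 },
  },
};

const counts = report.metadata.vulnerabilities;
const total = Object.values(counts).reduce((a, b) => a + b, 0);

// A common CI policy: fail the build on any high or critical finding.
const shouldFail = counts.high + counts.critical > 0;

console.log(total); // 9, matching "9 vulnerabilities (5 moderate, 2 high, 2 critical)"
console.log(shouldFail); // true
```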
In this tutorial, you went through various exercises to demonstrate how Node.js modules are organized into packages, and how these packages are managed by npm. In a Node.js project, you used npm packages as dependencies by creating and maintaining a package.json
file—a record of your project’s metadata, including what modules you installed. You also used the npm CLI tool to install, update, and remove modules, in addition to listing the dependency tree for your projects and checking and updating modules that are outdated.
In the future, leveraging existing code by using modules will speed up development time, as you don’t have to repeat functionality. You will also be able to create your own npm modules, and these will, in turn, be managed by others via npm commands. As for next steps, experiment with what you learned in this tutorial by installing and testing the variety of packages out there. See what the ecosystem provides to make problem solving easier. For example, you could try out TypeScript, a superset of JavaScript, or turn your website into mobile apps with Cordova. If you’d like to learn more about Node.js, see our other Node.js tutorials.
An effective logging solution is crucial to the success of any application. Winston is a versatile logging library and a popular logging solution available for Node.js applications. Winston’s features include support for multiple storage options, log levels, log queries, and a built-in profiler.
In this tutorial, you will use Winston to log a Node/Express application that you’ll create as part of this process. You’ll also see how to combine Winston with Morgan, another popular HTTP request middleware logger for Node.js, to consolidate HTTP request data logs with other information. After completing this tutorial, your Ubuntu server will be running a small Node/Express application, and Winston will be implemented to log errors and messages to a file and the console.
To follow this tutorial, you will need:
An Ubuntu 20.04 server with a sudo non-root user, which you can set up by following the initial server setup.
Node.js installed using the official PPA (personal package archive), which is explained in How To Install Node.js on Ubuntu 20.04, Option 2.
Winston is often used for logging events from web applications built with Node.js. In this step, you will create a simple Node.js web application using the Express framework. You will use express-generator
, a command-line tool, to get a Node/Express web application running quickly.
Because you installed the Node Package Manager during the prerequisites, you can use the npm
command to install express-generator
:
- sudo npm install express-generator -g
The -g
flag installs the package globally, which means it can be used as a command-line tool outside of an existing Node project/module.
With express-generator
installed, you can create your app using the express
command, followed by the name of the directory you want to use for the project:
- express myApp
For this tutorial, the project will be called myApp
.
Note: It is also possible to run the express-generator
tool directly without installing it globally as a system-wide command first. To do so, run this command:
- npx express-generator myApp
The npx
command is a command-runner shipped with the Node Package Manager that makes it easy to run command-line tools from the npm
registry.
During the first run, it will ask you if you agree to download the package:
Need to install the following packages:
express-generator
Ok to proceed? (y)
Answer y
and press ENTER
. Now you can use npx express-generator
in place of express
.
Next, install Nodemon. A Node.js application needs to be restarted whenever its source code changes for those changes to take effect, so Nodemon will watch for changes and restart the application automatically. Since you want to be able to use nodemon
as a command-line tool, install it with the -g
flag:
- sudo npm install nodemon -g
To finish setting up the application, move to the application directory and install dependencies as follows:
- cd myApp
- npm install
By default, applications created with express-generator
run on port 3000
, so you need to ensure that the firewall does not block the port.
To open port 3000
, run the following command:
- sudo ufw allow 3000
You now have everything you need to start your web application. To do so, run the following command:
- nodemon bin/www
This command starts the application on port 3000
. You can test if it’s working by pointing your browser to http://your_server_ip:3000
. You should see something like this:
At this point, you can start a second SSH session to your server for the remainder of this tutorial, leaving the web application you just started running in the original session. For the rest of this article, the initial SSH session currently running the application will be called Session A. Any commands in Session A will appear on a dark navy background like this:
- nodemon bin/www
You will use the new SSH session for running commands and editing files. This session will be called Session B. Any commands in Session B will appear on a light blue background like this:
- cd ~/myApp
Unless otherwise noted, you will run all remaining commands in Session B.
In this step, you created the basic app. Next, you will customize it.
While the default application created by express-generator
is a good start, you need to customize the application so that it will call the correct logger when needed.
express-generator
includes the Morgan HTTP logging middleware that you will use to log data about all HTTP requests. Since Morgan supports output streams, it pairs nicely with the stream support built into Winston, enabling you to consolidate HTTP request data logs with anything else you choose to log with Winston.
The express-generator
boilerplate uses the variable logger
when referencing the morgan
package. Since you will use morgan
and winston
, which are both logging packages, it can be confusing to call either one of them logger
. To specify which variable you want, you can change the variable declarations by editing the app.js
file.
To open app.js
for editing, use nano
or your favorite text editor:
- nano ~/myApp/app.js
Find the following line near the top of the file:
...
var logger = require('morgan');
...
Change the variable name from logger
to morgan
:
...
var morgan = require('morgan');
...
This update specifies that the declared variable morgan
will call the require()
method linked to the Morgan request logger.
You need to find where else the variable logger
was referenced in the file and change it to morgan
. You will also need to change the log format used by the morgan
package to combined
, which is the standard Apache log format and will include useful information in the logs, such as remote IP address and the user-agent HTTP request header.
To do so, find the following line:
...
app.use(logger('dev'));
...
Update it to the following:
...
app.use(morgan('combined'));
...
These changes will help you understand which logging package is referenced at any given time after integrating the Winston configuration.
When finished, save and close the file.
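The combined
format produces the standard Apache combined log line. As a sketch of what each line contains, here is one pulled apart with a regular expression; the sample line is invented for illustration:

```javascript
// Sketch: fields inside an Apache "combined" log line (sample line invented).
const line =
  '203.0.113.7 - - [25/Apr/2022:18:10:55 +0000] "GET / HTTP/1.1" 200 170 "-" "Mozilla/5.0"';

// remote-addr ident user [date] "method path protocol" status bytes "referrer" "user-agent"
const combined =
  /^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\S+) "([^"]*)" "([^"]*)"$/;

const m = line.match(combined);
const entry = {
  ip: m[1],
  date: m[4],
  method: m[5],
  path: m[6],
  status: Number(m[8]),
  userAgent: m[11],
};

console.log(entry.ip, entry.method, entry.status); // 203.0.113.7 GET 200
```

This is the extra request context (remote IP, user-agent, referrer) that the default dev
format does not include.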
Now that your app is set up, you can start working with Winston.
In this step, you will install and configure Winston. You will also explore the configuration options available as part of the winston
package and create a logger to log information to a file and the console.
Install winston
with the following command:
- cd ~/myApp
- npm install winston
It’s helpful to keep any support or utility configuration files for your applications in a special directory. Create a config
folder that will contain the winston
configuration:
- mkdir ~/myApp/config
Next, create a folder that will contain your log files:
- mkdir ~/myApp/logs
Finally, install app-root-path
:
- npm install app-root-path --save
The app-root-path
package is useful when specifying paths in Node.js. Though not directly related to Winston, it helps when determining paths to files. You will use it to specify the location of the Winston log files from the project’s root and to avoid ugly relative path syntax.
Now that the configuration for handling logging is in place, you can define your settings. Create and open ~/myApp/config/winston.js
for editing:
- nano ~/myApp/config/winston.js
The winston.js
file will contain your winston
configuration.
Next, add the following code to require the app-root-path
and winston
packages:
const appRoot = require('app-root-path');
const winston = require('winston');
With these variables in place, you can define the configuration settings for your transports. Transports are a concept introduced by Winston that refers to the storage/output mechanisms used for the logs. Winston comes with four core transports built-in: Console, File, HTTP, and Stream.
You will focus on the console and file transports for this tutorial. The console transport will log information to the console, and the file transport will log information to a specified file. Each transport definition can contain configuration settings, such as file size, log levels, and log format.
Here is a quick summary of the settings you will use for each transport:
level: the level of messages to log.
filename: the file to write log data to.
handleExceptions: catch and log unhandled exceptions.
maxsize: max size of the log file, in bytes, before a new file is created.
maxFiles: limit on the number of files created when the log file size is exceeded.
format: how the log output will be formatted.
Logging levels indicate message priority and are denoted by an integer. Winston uses npm
logging levels, prioritized from 0 to 6 (highest to lowest): error: 0, warn: 1, info: 2, http: 3, verbose: 4, debug: 5, and silly: 6.
When specifying a logging level for a particular transport, anything at that level or higher will be logged. For example, when setting a level of info
, anything at level error
, warn
, or info
will be logged.
Log levels are specified when calling the logger, which means you can run the following command to record an error: logger.error('test error message')
.
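Internally, the filtering rule is just a numeric comparison: a message is emitted only when its priority number is at or below the transport's configured level. A minimal sketch of that rule (not Winston's actual source):

```javascript
// Sketch of npm-style level filtering (not Winston's real implementation).
// Lower numbers mean higher priority.
const levels = { error: 0, warn: 1, info: 2, http: 3, verbose: 4, debug: 5, silly: 6 };

function shouldLog(transportLevel, messageLevel) {
  return levels[messageLevel] <= levels[transportLevel];
}

// A transport set to "info" accepts error, warn, and info...
console.log(shouldLog("info", "error")); // true
console.log(shouldLog("info", "info")); // true
// ...but drops lower-priority messages:
console.log(shouldLog("info", "debug")); // false
```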
Still in the config file, add the following code to define the configuration settings for the file
and console
transports in the winston
configuration:
...
// define the custom settings for each transport (file, console)
const options = {
file: {
level: "info",
filename: `${appRoot}/logs/app.log`,
handleExceptions: true,
maxsize: 5242880, // 5MB
maxFiles: 5,
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
},
console: {
level: "debug",
handleExceptions: true,
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
),
},
};
Next, add the following code to instantiate a new winston
logger with file and console transports using the properties defined in the options
variable:
...
// instantiate a new Winston Logger with the settings defined above
const logger = winston.createLogger({
transports: [
new winston.transports.File(options.file),
new winston.transports.Console(options.console),
],
exitOnError: false, // do not exit on handled exceptions
});
By default, morgan
outputs to the console only, so you will define a stream function that will be able to get morgan
-generated output into the winston
log files. You will use the info
level to pick up the output by both transports (file and console). Add the following code to the config file:
...
// create a stream object with a 'write' function that will be used by `morgan`
logger.stream = {
write: function(message, encoding) {
// use the 'info' log level so the output will be picked up by both
// transports (file and console)
logger.info(message);
},
};
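This works because Morgan only needs something with a write(message)
method on its stream
option. Here is the hand-off in miniature; everything below is illustrative stand-in code, with no real morgan or winston involved:

```javascript
// Sketch: any object with write() can act as Morgan's output stream.
// All names below are invented stand-ins for illustration.
const captured = [];
const logger = { info: (msg) => captured.push(`info: ${msg.trim()}`) };

// The adapter from the config file, in miniature:
const stream = {
  write: (message) => logger.info(message),
};

// A stand-in for morgan: formats a request and writes it to its stream.
function fakeMorgan(req, out) {
  out.write(`${req.method} ${req.url} ${req.status}\n`);
}

fakeMorgan({ method: "GET", url: "/", status: 200 }, stream);
console.log(captured[0]); // info: GET / 200
```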
Finally, add the code below to export the logger so it can be used in other parts of the application:
...
module.exports = logger;
The completed winston
configuration file will now look like this:
const appRoot = require("app-root-path");
const winston = require("winston");
// define the custom settings for each transport (file, console)
const options = {
file: {
level: "info",
filename: `${appRoot}/logs/app.log`,
handleExceptions: true,
maxsize: 5242880, // 5MB
maxFiles: 5,
format: winston.format.combine(
winston.format.timestamp(),
winston.format.json()
),
},
console: {
level: "debug",
handleExceptions: true,
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
),
},
};
// instantiate a new Winston Logger with the settings defined above
const logger = winston.createLogger({
transports: [
new winston.transports.File(options.file),
new winston.transports.Console(options.console),
],
exitOnError: false, // do not exit on handled exceptions
});
// create a stream object with a 'write' function that will be used by `morgan`
logger.stream = {
write: function (message, encoding) {
// use the 'info' log level so the output will be picked up by both
// transports (file and console)
logger.info(message);
},
};
module.exports = logger;
Save and close the file.
You now have the logger configured, but your application is still not aware of it, or how to use it, so you need to integrate the logger with the application.
To get your logger working with the application, you need to make express
aware of it. You saw in Step 2 that your express
configuration is located in app.js
, so you can import your logger into this file.
Open the file for editing:
- nano ~/myApp/app.js
Add a winston
variable declaration near the top of the file with the other require
statements:
...
var winston = require('./config/winston');
...
The first place you will use winston
is with morgan
. Still in app.js
, find the following line:
...
app.use(morgan('combined'));
...
Update it to include the stream
option:
...
app.use(morgan('combined', { stream: winston.stream }));
...
Here, you set the stream
option to the stream interface you created as part of the winston
configuration.
Save and close the file.
In this step, you configured your Express application to work with Winston. Next, you will review the log data.
Now that the application has been configured, you’re ready to see some log data. In this step, you will review the log entries and update your settings with a sample custom log message.
If you reload the page in the web browser, you should see something similar to the following output in the console of SSH Session A:
Output[nodemon] restarting due to changes...
[nodemon] starting `node bin/www`
info: ::1 - - [25/Apr/2022:18:10:55 +0000] "GET / HTTP/1.1" 200 170 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
info: ::1 - - [25/Apr/2022:18:10:55 +0000] "GET /stylesheets/style.css HTTP/1.1" 304 - "http://localhost:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
There are two log entries here: the first for the request to the HTML page; the second for the associated stylesheet. Since each transport is configured to handle info
level log data, you should also see similar information in the file transport located at ~/myApp/logs/app.log
.
To view the contents of the log file, run the following command:
- tail ~/myApp/logs/app.log
tail
will output the last parts of the file in your terminal.
You should see something similar to the following:
{"level":"info","message":"::1 - - [25/Apr/2022:18:10:55 +0000] \"GET / HTTP/1.1\" 304 - \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36\"\n","timestamp":"2022-04-25T18:10:55.573Z"}
{"level":"info","message":"::1 - - [25/Apr/2022:18:10:55 +0000] \"GET /stylesheets/style.css HTTP/1.1\" 304 - \"http://localhost:3000/\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36\"\n","timestamp":"2022-04-25T18:10:55.588Z"}
The output in the file transport will be written as a JSON object since you used winston.format.json()
in the format
option for the file transport configuration. You can learn more about JSON in An Introduction to JSON.
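One payoff of JSON-formatted log files is that they are trivially machine-readable: each line parses with JSON.parse
. A sketch of filtering a log file for errors; the sample lines below are shortened versions of the entries above:

```javascript
// Sketch: querying JSON log lines with JSON.parse (sample lines shortened).
const fileContents = [
  '{"level":"info","message":"GET / 200","timestamp":"2022-04-25T18:10:55.573Z"}',
  '{"level":"error","message":"404 - Not Found - /foo","timestamp":"2022-04-25T18:08:33.508Z"}',
].join("\n");

const errors = fileContents
  .split("\n")
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.level === "error");

console.log(errors.length); // 1
console.log(errors[0].message); // 404 - Not Found - /foo
```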
So far, your logger is only recording HTTP requests and related data. This information is essential to have in your logs.
In the future, you may want to record custom log messages, such as for recording errors or profiling database query performance. As an example, you will call the logger from the error handler route. By default, the express-generator
package already includes a 404
and 500
error handler route, so you will work with that.
Open the ~/myApp/app.js
file:
- nano ~/myApp/app.js
Find the code block at the bottom of the file that looks like this:
...
// error handler
app.use(function(err, req, res, next) {
// set locals, only providing error in development
res.locals.message = err.message;
res.locals.error = req.app.get('env') === 'development' ? err : {};
// render the error page
res.status(err.status || 500);
res.render('error');
});
...
This section is the final error-handling route that will ultimately send an error response back to the client. Since all server-side errors will be run through this route, it’s a good place to include the winston
logger.
Because you are now dealing with errors, you want to use the error
log level. Both transports are configured to log error
level messages, so you should see the output in the console and file logs.
You can include anything you want in the log, including information like:
err.status: The HTTP error status code. If one is not already present, default to 500.
err.message: Details of the error.
req.originalUrl: The URL that was requested.
req.path: The path part of the request URL.
req.method: HTTP method of the request (GET, POST, PUT, etc.).
req.ip: Remote IP address of the request.
Update the error handler route to include the winston
logging:
...
// error handler
app.use(function(err, req, res, next) {
// set locals, only providing error in development
res.locals.message = err.message;
res.locals.error = req.app.get('env') === 'development' ? err : {};
// include winston logging
winston.error(
`${err.status || 500} - ${err.message} - ${req.originalUrl} - ${req.method} - ${req.ip}`
);
// render the error page
res.status(err.status || 500);
res.render('error');
});
...
Save and close the file.
To test this process, try to access a non-existent page in your project. Accessing a non-existent page will throw a 404 error. In your web browser, attempt to load the following URL: http://your_server_ip:3000/foo
. Thanks to the boilerplate created by express-generator
, the application is set up to respond to such an error.
Your browser will display an error message like this:
When you look at the console in SSH Session A, there should be a log entry for the error. Thanks to the colorize
format applied, it should be easy to spot:
Output[nodemon] starting `node bin/www`
error: 404 - Not Found - /foo - GET - ::1
info: ::1 - - [25/Apr/2022:18:08:33 +0000] "GET /foo HTTP/1.1" 404 982 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
info: ::1 - - [25/Apr/2022:18:08:33 +0000] "GET /stylesheets/style.css HTTP/1.1" 304 - "http://localhost:3000/foo" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
As for the file logger, running the tail
command again should show you the new log records:
- tail ~/myApp/logs/app.log
You will see a message like the following:
{"level":"error","message":"404 - Not Found - /foo - GET - ::1","timestamp":"2022-04-25T18:08:33.508Z"}
The error message includes all the data you specifically instructed winston
to log as part of the error handler. This information will include the error status (404 - Not Found), the requested URL (/foo
), the request method (GET
), the IP address making the request, and the timestamp for when the request was made.
In this tutorial, you built a simple Node.js web application and integrated a Winston logging solution that will function as an effective tool to provide insight into the application’s performance.
You can do much more to build robust logging solutions for your applications, particularly as your needs become more complex. To learn more about Winston transports, see Winston Transports Documentation. To create your own transports, see Adding Custom Transports To create an HTTP endpoint for use with the HTTP core transport, see winstond
. To use Winston as a profiling tool, see Profiling.
Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.
In this guide, we will show you three different ways of getting Node.js installed on an Ubuntu 22.04 server:
using apt
to install the nodejs
package from Ubuntu’s default software repository
using apt
with an alternate PPA software repository to install specific versions of the nodejs
package
installing nvm
, the Node Version Manager, and using it to install and manage multiple versions of Node.js
For many users, using apt
with the default repo will be sufficient. If you need specific newer (or legacy) versions of Node, you should use the PPA repository. If you are actively developing Node applications and need to switch between node
versions frequently, choose the nvm
method.
This guide assumes that you are using Ubuntu 22.04. Before you begin, you should have a non-root user account with sudo
privileges set up on your system. You can learn how to do this by following the Ubuntu 22.04 initial server setup tutorial.
Ubuntu 22.04 contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 12.22.9. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.
Warning: The version of Node.js included with Ubuntu 22.04, version 12.22.9, is an LTS, or “long-term support” release. It is technically outdated, but should be supported until the release of Ubuntu 24.04.
To get this version, you can use the apt
package manager. Refresh your local package index first by typing:
- sudo apt update
Then install Node.js:
- sudo apt install nodejs
Press Y
when prompted to confirm installation. If you are prompted to restart any services, press ENTER
to accept the defaults and continue. Check that the install was successful by querying node
for its version number:
- node -v
Output
v12.22.9
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, you’ll also want to install npm
, the Node.js package manager. You can do this by installing the npm
package with apt
:
- sudo apt install npm
This will allow you to install modules and packages to use with Node.js.
At this point you have successfully installed Node.js and npm
using apt
and the default Ubuntu software repositories. The next section will show how to use an alternate repository to install different versions of Node.js.
To install a different version of Node.js, you can use a PPA (personal package archive) maintained by NodeSource. These PPAs have more versions of Node.js available than the official Ubuntu repositories. Node.js v14, v16, and v18 are available as of the time of writing.
First, we will install the PPA in order to get access to its packages. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 18.x
with your preferred version string (if different).
- cd ~
- curl -sL https://deb.nodesource.com/setup_18.x -o nodesource_setup.sh
Refer to the NodeSource documentation for more information on the available versions.
You can inspect the contents of the downloaded script with nano
(or your preferred text editor):
- nano nodesource_setup.sh
Running third party shell scripts is not always considered a best practice, but in this case, NodeSource implements their own logic in order to ensure the correct commands are being passed to your package manager based on distro and version requirements. If you are satisfied that the script is safe to run, exit your editor, then run the script with sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. You can now install the Node.js package in the same way you did in the previous section. It may be a good idea to fully remove your older Node.js packages before installing the new version, by using sudo apt remove nodejs npm
. This will not affect your configurations at all, only the installed versions. Third party PPAs don’t always package their software in a way that works as a direct upgrade over stock packages, and if you have trouble, you can always try to revert to a clean slate.
- sudo apt install nodejs
Verify that you’ve installed the new version by running node
with the -v
version flag:
- node -v
Output
v18.7.0
The NodeSource nodejs
package contains both the node
binary and npm
, so you don’t need to install npm
separately.
At this point you have successfully installed Node.js and npm
using apt
and the NodeSource PPA. The next section will show how to use the Node Version Manager to install and manage multiple versions of Node.js.
Another way of installing Node.js that is particularly flexible is to use nvm, the Node Version Manager. This piece of software allows you to install and maintain many different independent versions of Node.js, and their associated Node packages, at the same time.
To install NVM on your Ubuntu 22.04 machine, visit the project’s GitHub page. Copy the curl
command from the README file that displays on the main page. This will get you the most recent version of the installation script.
Before piping the command through to bash
, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash
segment at the end of the curl
command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh
Take a look and make sure you are comfortable with the changes it is making. When you are satisfied, run the command again with | bash
appended at the end. The URL you use will change depending on the latest version of nvm, but as of right now, the script can be downloaded and executed by typing:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
This will install the nvm
script to your user account. To use it, you must first source your .bashrc
file:
- source ~/.bashrc
Now, you can ask NVM which versions of Node are available:
- nvm list-remote
Output
...
v16.11.1
v16.12.0
v16.13.0 (LTS: Gallium)
v16.13.1 (LTS: Gallium)
v16.13.2 (LTS: Gallium)
v16.14.0 (Latest LTS: Gallium)
v17.0.0
v17.0.1
v17.1.0
v17.2.0
v17.3.0
v17.3.1
v17.4.0
v17.5.0
v17.6.0
It’s a very long list! You can install a version of Node by typing any of the release versions you see. For instance, to get version v16.14.0 (another LTS release), you can type:
- nvm install v16.14.0
You can see the different versions you have installed by typing:
- nvm list
Output
-> v16.14.0
default -> v16.14.0
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v16.14.0) (default)
stable -> 16.14 (-> v16.14.0) (default)
lts/* -> lts/gallium (-> v16.14.0)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.10 (-> N/A)
lts/fermium -> v14.19.0 (-> N/A)
lts/gallium -> v16.14.0
This shows the currently active version on the first line (-> v16.14.0
), followed by some named aliases and the versions that those aliases point to.
Note: if you also have a version of Node.js installed through apt
, you may see a system
entry here. You can always activate the system-installed version of Node using nvm use system
.
You can install a release based on these aliases as well. For instance, to install fermium
, run the following:
- nvm install lts/fermium
Output
Downloading and installing node v14.19.0...
Downloading https://nodejs.org/dist/v14.19.0/node-v14.19.0-linux-x64.tar.xz...
################################################################################# 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v14.19.0 (npm v6.14.16)
You can verify that the install was successful using the same technique from the other sections, by typing:
- node -v
Output
v14.19.0
The correct version of Node is installed on our machine as we expected. A compatible version of npm
is also available.
You can uninstall Node.js using apt
or nvm
, depending on how it was installed. To remove the version from the system repositories, use apt remove
:
- sudo apt remove nodejs
By default, apt remove
retains any local configuration files that were created since install. If you don’t want to save the configuration files for later use, use apt purge
:
- sudo apt purge nodejs
To uninstall a version of Node.js that you installed using nvm
, first determine whether it is the current active version:
- nvm current
If the version you are targeting is not the current active version, you can run:
- nvm uninstall node_version
Output
Uninstalled node node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you’ll first need to deactivate nvm
to enable your changes:
- nvm deactivate
Now you can uninstall the current version using the uninstall
command used previously. This removes all files associated with the targeted version of Node.js.
There are quite a few ways to get up and running with Node.js on your Ubuntu 22.04 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Ubuntu’s repository is the easiest method, using nvm
or a NodeSource PPA offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that they will restart on reboot or failure and are safe for use in a production environment.
In this tutorial, you will set up a production-ready Node.js environment on a single Ubuntu 22.04 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS using a free certificate provided by Let’s Encrypt.
This guide assumes that you have the following:
When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://example.com/
.
Let’s write a Hello World application that returns “Hello World” to any HTTP requests. This sample application will help you get up and running with Node.js. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.
First, using nano
or your favorite text editor, create a sample application called hello.js
:
- nano hello.js
Insert the following code into the file:
const http = require('http');
const hostname = 'localhost';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Save the file and exit the editor. If you are using nano
, press Ctrl+X
, then when prompted, Y
and then Enter.
This Node.js application listens on the specified address (localhost
) and port (3000
), and returns “Hello World!” with a 200
HTTP success code. Since we’re listening on localhost
, remote clients won’t be able to connect to our application.
To test your application, type:
- node hello.js
You will receive the following output:
Output
Server running at http://localhost:3000/
Note: Running a Node.js application in this manner will block additional commands until the application is killed by pressing CTRL+C
.
To test the application, open another terminal session on your server, and connect to localhost
with curl
:
- curl http://localhost:3000
If you get the following output, the application is working properly and listening on the correct address and port:
Output
Hello World!
If you do not get the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.
Once you’re sure it’s working, kill the application (if you haven’t already) by pressing CTRL+C
.
Next let’s install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.
Use npm
to install the latest version of PM2 on your server:
- sudo npm install pm2@latest -g
The -g
option tells npm
to install the module globally, so that it’s available system-wide.
Let’s first use the pm2 start
command to run your application, hello.js
, in the background:
- pm2 start hello.js
This also adds your application to PM2’s process list, which is outputted every time you start an application:
Output
...
[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ hello │ fork │ 0 │ online │ 0% │ 25.2mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
As indicated above, PM2 automatically assigns an App name
(based on the filename, without the .js
extension) and a PM2 id
. PM2 also maintains other information, such as the PID
of the process, its current status, and memory usage.
Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup
subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes on server boots:
- pm2 startup systemd
The last line of the resulting output will include a command to run with superuser privileges in order to set PM2 to start on boot:
Output
[PM2] Init System found: systemd
sammy
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Run the command from the output, with your username in place of sammy
:
- sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
As an additional step, we can save the PM2 process list and corresponding environments:
- pm2 save
You have now created a systemd unit that runs pm2
for your user on boot. This pm2
instance, in turn, runs hello.js
.
Start the service with systemctl
:
- sudo systemctl start pm2-sammy
Check the status of the systemd unit:
- systemctl status pm2-sammy
For a detailed overview of systemd, please review Systemd Essentials: Working with Services, Units, and the Journal.
In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.
Stop an application with this command (specify the PM2 App name
or id
):
- pm2 stop app_name_or_id
Restart an application:
- pm2 restart app_name_or_id
List the applications currently managed by PM2:
- pm2 list
Get information about a specific application using its App name
:
- pm2 info app_name
The PM2 process monitor can be pulled up with the monit
subcommand. This displays the application status, CPU, and memory usage:
- pm2 monit
Note that running pm2
without any arguments will also display a help page with example usage.
Now that your Node.js application is running and managed by PM2, let’s set up the reverse proxy.
Your application is running and listening on localhost
, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.
In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/sites-available/example.com
file. Open this file for editing:
- sudo nano /etc/nginx/sites-available/example.com
Within the server
block, you should have an existing location /
block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the highlighted portion to the correct port number:
server {
...
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
This configures the server to respond to requests at its root. Assuming our server is available at example.com
, accessing https://example.com/
via a web browser would send the request to hello.js
, listening on port 3000
at localhost
.
You can add additional location
blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001
, you could add this location block to allow access to it via https://example.com/app2
:
server {
...
location /app2 {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
Once you are done adding the location blocks for your applications, save the file and exit your editor.
Make sure you didn’t introduce any syntax errors by typing:
- sudo nginx -t
Restart Nginx:
- sudo systemctl restart nginx
Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your server’s URL (its public IP address or domain name).
Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on an Ubuntu 22.04 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.
Next, you may want to look into How to build a Node.js application with Docker.
A CSV is a plain text file format for storing tabular data. The CSV file uses a comma delimiter to separate values in table cells, and a new line delineates where rows begin and end. Most spreadsheet programs and databases can export and import CSV files. Because CSV is a plain-text file, any programming language can parse and write to a CSV file. Node.js has many modules that can work with CSV files, such as node-csv
, fast-csv
, and papaparse
.
In this tutorial, you will use the node-csv
module to read a CSV file using Node.js streams, which lets you read large datasets without consuming a lot of memory. You will modify the program to move data parsed from the CSV file into a SQLite database. You will also retrieve data from the database, parse it with node-csv
, and use Node.js streams to write it to a CSV file in chunks.
To follow this tutorial, you will need:
Node.js installed on your local or server environment. Follow How to Install Node.js and Create a Local Development Environment to install Node.js.
SQLite installed on your local or server environment, which you can install by following step 1 in How To Install and Use SQLite on Ubuntu 20.04. Knowledge on how to use SQLite is helpful and can be learned in steps 2-7 of the installation guide.
Familiarity with writing a Node.js program. See How To Write and Run Your First Program in Node.js.
Familiarity with Node.js streams. See How To Work with Files Using Streams in Node.js.
In this section, you will create the project directory and download packages for your application. You will also download a CSV dataset from Stats NZ, which contains international migration data in New Zealand.
To get started, make a directory called csv_demo
and navigate into the directory:
- mkdir csv_demo
- cd csv_demo
Next, initialize the directory as an npm project using the npm init
command:
- npm init -y
The -y
option tells npm init
to answer “yes” to all the prompts. This command creates a package.json
with default values that you can change anytime.
With the directory initialized as an npm project, you can now install the necessary dependencies: node-csv
and node-sqlite3
.
Enter the following command to install node-csv
:
- npm install csv
The node-csv
module is a collection of modules that allows you to parse and write data to a CSV file. The command installs all four modules that are part of the node-csv
package: csv-generate
, csv-parse
, csv-stringify
, and stream-transform
. You will use the csv-parse
module to parse a CSV file and the csv-stringify
module to write data to a CSV file.
Next, install the node-sqlite3
module:
- npm install sqlite3
The node-sqlite3
module allows your app to interact with the SQLite database.
After installing the packages in your project, download the New Zealand migration CSV file with the wget
command:
- wget https://www.stats.govt.nz/assets/Uploads/International-migration/International-migration-September-2021-Infoshare-tables/Download-data/international-migration-September-2021-estimated-migration-by-age-and-sex-csv.csv
The CSV file you downloaded has a long name. To make it easier to work with, rename the file name to a shorter name using the mv
command:
- mv international-migration-September-2021-estimated-migration-by-age-and-sex-csv.csv migration_data.csv
The new CSV filename, migration_data.csv
, is shorter and easier to work with.
Using nano
, or your favorite text editor, open the file:
- nano migration_data.csv
Once open, you will see contents similar to this:
year_month,month_of_release,passenger_type,direction,sex,age,estimate,standard_error,status
2001-01,2020-09,Long-term migrant,Arrivals,Female,0-4 years,344,0,Final
2001-01,2020-09,Long-term migrant,Arrivals,Male,0-4 years,341,0,Final
...
The first line contains the column names, and all subsequent lines have the data corresponding to each column. A comma separates each piece of data. This character is known as a delimiter because it delineates the fields. You are not limited to commas: other popular delimiters include colons (:), semicolons (;), and tabs (\t). You need to know which delimiter a file uses, since most modules require it to parse the file.
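Splitting on the delimiter is the core idea behind any CSV parser. The following toy sketch, using only the standard library, illustrates that idea; unlike csv-parse, it does not handle quoted fields, so it is for illustration only:

```javascript
// Toy parser: split rows on newlines and fields on the delimiter.
// Real CSV parsers such as csv-parse also handle quoted fields.
function toyParse(text, delimiter = ",") {
  return text
    .trim()
    .split("\n")
    .map((line) => line.split(delimiter));
}

const sample = "year_month,sex,estimate\n2001-01,Female,344";
console.log(toyParse(sample));
// [ [ 'year_month', 'sex', 'estimate' ], [ '2001-01', 'Female', '344' ] ]
```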
After reviewing the file and identifying the delimiter, exit your migration_data.csv
file using CTRL+X
.
You have now installed the necessary dependencies for your project. In the next section, you will read a CSV file.
In this section, you will use node-csv
to read a CSV file and log its content in the console. You will use the fs
module’s createReadStream()
method to read the data from the CSV file and create a readable stream. You will then pipe the stream to another stream initialized with the csv-parse
module to parse the chunks of data. Once the chunks of data have been parsed, you can log them in the console.
Create and open a readCSV.js
file in your preferred editor:
- nano readCSV.js
In your readCSV.js
file, import the fs
and csv-parse
modules by adding the following lines:
const fs = require("fs");
const { parse } = require("csv-parse");
In the first line, you define the fs
variable and assign it the fs
object that the Node.js require()
method returns when it imports the module.
In the second line, you extract the parse
method from the object returned by the require()
method into the parse
variable using the destructuring syntax.
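Destructuring works on any object, not just the one returned by require(). A small sketch of the same syntax, using a plain object standing in for the module (the object and its properties here are made up for illustration):

```javascript
// A plain object standing in for what require("csv-parse") returns.
const csvModule = {
  parse: (text) => text.split(","),
  version: "x.y.z",
};

// Destructuring pulls the parse property into a local variable;
// equivalent to: const parse = csvModule.parse;
const { parse } = csvModule;

console.log(parse("a,b,c")); // [ 'a', 'b', 'c' ]
```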
Add the following lines to read the CSV file:
...
fs.createReadStream("./migration_data.csv")
.pipe(parse({ delimiter: ",", from_line: 2 }))
.on("data", function (row) {
console.log(row);
})
The createReadStream()
method from the fs
module accepts an argument of the filename you want to read, which is migration_data.csv
here. Then, it creates a readable stream, which takes a large file and breaks it into smaller chunks. A readable stream allows you to only read data from it and not write to it.
After creating the readable stream, Node’s pipe()
method forwards chunks of data from the readable stream to another stream. The second stream is created when the csv-parse
module’s parse()
method is invoked inside the pipe()
method. The csv-parse
module implements a transform stream (a readable and writable stream), taking a data chunk and transforming it to another form. For example, when it receives a chunk like 2001-01,2020-09,Long-term migrant,Arrivals,Female,0-4 years,344
, the parse()
method will transform it into an array.
The parse()
method takes an object that accepts properties. The object then configures and provides more information about the data the method will parse. The object takes the following properties:
delimiter
defines the character that separates each field in the row. The value ,
tells the parser that commas demarcate the fields.
from_line
defines the line where the parser should start parsing the rows. With the value 2
, the parser will skip line 1 and start at line 2. Because you will insert the data in the database later, this property helps you avoid inserting the column names in the first row of the database.
Next, you attach a streaming event using the Node.js on()
method. A streaming event allows the method to consume a chunk of data if a certain event is emitted. The data
event is triggered when data transformed from the parse()
method is ready to be consumed. To access the data, you pass a callback to the on()
method, which takes a parameter named row
. The row
parameter is a data chunk transformed into an array. Within the callback, you log the data in the console using the console.log()
method.
Before running the file, you will add more stream events. These stream events handle errors and write a success message to the console when all the data in the CSV file has been consumed.
Still in your readCSV.js
file, add the highlighted code:
...
fs.createReadStream("./migration_data.csv")
.pipe(parse({ delimiter: ",", from_line: 2 }))
.on("data", function (row) {
console.log(row);
})
.on("end", function () {
console.log("finished");
})
.on("error", function (error) {
console.log(error.message);
});
The end
event is emitted when all the data in the CSV file has been read. When that happens, the callback is invoked and logs a message that says it has finished.
If an error occurs anywhere while reading and parsing the CSV data, the error
event is emitted, which invokes the callback and logs the error message in the console.
Your complete file should now look like the following:
const fs = require("fs");
const { parse } = require("csv-parse");
fs.createReadStream("./migration_data.csv")
.pipe(parse({ delimiter: ",", from_line: 2 }))
.on("data", function (row) {
console.log(row);
})
.on("end", function () {
console.log("finished");
})
.on("error", function (error) {
console.log(error.message);
});
Save and exit out of your readCSV.js
file using CTRL+X
.
Next, run the file using the node
command:
- node readCSV.js
The output will look similar to this (edited for brevity):
Output[
'2001-01',
'2020-09',
'Long-term migrant',
'Arrivals',
'Female',
'0-4 years',
'344',
'0',
'Final'
]
...
[
'2021-09',
...
'70',
'Provisional'
]
finished
All the rows in the CSV file have been transformed into arrays using the csv-parse
transform stream. Because logging happens each time a chunk is received from the stream, the data appears as though it is being downloaded rather than being displayed all at once.
In this step, you read data in a CSV file and transformed it into arrays. Next, you will insert data from a CSV file into the database.
Inserting data from a CSV file into the database using Node.js gives you access to a vast library of modules that you can use to process, clean, or enhance the data before inserting it into the database.
In this section, you will establish a connection with the SQLite database using the node-sqlite3
module. You will then create a table in the database, copy the readCSV.js
file, and modify it to insert all the data read from the CSV file into the database.
Create and open a db.js
file in your editor:
- nano db.js
In your db.js
file, add the following lines to import the fs
and node-sqlite3
modules:
const fs = require("fs");
const sqlite3 = require("sqlite3").verbose();
const filepath = "./population.db";
...
In the third line, you define the path of the SQLite database and store it in the variable filepath
. The database file doesn’t exist yet, but it will be needed for node-sqlite3
to establish a connection with the database.
In the same file, add the following lines to connect Node.js to a SQLite database:
...
function connectToDatabase() {
if (fs.existsSync(filepath)) {
return new sqlite3.Database(filepath);
} else {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
console.log("Connected to the database successfully");
});
return db;
}
}
Here, you define a function named connectToDatabase()
to establish a connection to the database. Within the function, you invoke the fs
module’s existsSync()
method in an if
statement, which checks if the database file exists in the project directory. If the if
condition evaluates to true
, you instantiate the SQLite’s Database()
class of the node-sqlite3
module with the database filepath. Once the connection is established, the function returns the connection object and exits.
However, if the if
statement evaluates to false
(if the database file doesn’t exist), execution will skip to the else
block. In the else
block, you instantiate the Database()
class with two arguments: the database file path and a callback.
The first argument is the path of the SQLite database file, which is ./population.db
. The second argument is a callback that will be invoked automatically when the connection with the database has been established successfully or if an error occurred. The callback takes an error
object as a parameter, which is null
if the connection is successful. Within the callback, the if
statement checks if the error
object is set. If it evaluates to true
, the callback logs an error message and returns. If it evaluates to false
, you log a success message confirming that the connection has been established.
Currently, the if
and else
blocks establish the connection object. You pass a callback when invoking the Database
class in the else
block to create a table in the database, but only if the database file does not exist. If the database file already exists, the function will execute the if
block, connect with the database, and return the connection object.
To create a table if the database file doesn’t exist, add the highlighted code:
const fs = require("fs");
const sqlite3 = require("sqlite3").verbose();
const filepath = "./population.db";
function connectToDatabase() {
if (fs.existsSync(filepath)) {
return new sqlite3.Database(filepath);
} else {
const db = new sqlite3.Database(filepath, (error) => {
if (error) {
return console.error(error.message);
}
createTable(db);
console.log("Connected to the database successfully");
});
return db;
}
}
function createTable(db) {
db.exec(`
CREATE TABLE migration
(
year_month VARCHAR(10),
month_of_release VARCHAR(10),
passenger_type VARCHAR(50),
direction VARCHAR(20),
sex VARCHAR(10),
age VARCHAR(50),
estimate INT
)
`);
}
module.exports = connectToDatabase();
Now the connectToDatabase()
invokes the createTable()
function, which accepts the connection object stored in the db
variable as an argument.
Outside the connectToDatabase()
function, you define the createTable()
function, which accepts the connection object db
as a parameter. You invoke the exec()
method on the db
connection object that takes a SQL statement as an argument. The SQL statement creates a table named migration
with seven columns. The column names match the first seven headings in the migration_data.csv
file.
Finally, you invoke the connectToDatabase()
function and export the connection object returned by the function so that it can be reused in other files.
Save and exit your db.js
file.
With the database connection established, you will now copy and modify the readCSV.js
file to insert the rows that the csv-parse
module parsed into the database.
Copy and rename the file to insertData.js
with the following command:
- cp readCSV.js insertData.js
Open the insertData.js
file in your editor:
- nano insertData.js
Add the highlighted code:
const fs = require("fs");
const { parse } = require("csv-parse");
const db = require("./db");
fs.createReadStream("./migration_data.csv")
.pipe(parse({ delimiter: ",", from_line: 2 }))
.on("data", function (row) {
db.serialize(function () {
db.run(
`INSERT INTO migration VALUES (?, ?, ? , ?, ?, ?, ?)`,
[row[0], row[1], row[2], row[3], row[4], row[5], row[6]],
function (error) {
if (error) {
return console.log(error.message);
}
console.log(`Inserted a row with the id: ${this.lastID}`);
}
);
});
});
In the third line, you import the connection object from the db.js
file and store it in the variable db
.
Inside the data
event callback attached to the fs
module stream, you invoke the serialize()
method on the connection object. The method ensures that a SQL statement finishes executing before another one starts executing, which can help prevent database race conditions where the system runs competing operations simultaneously.
The serialize()
method takes a callback. Within the callback, you invoke the run
method on the db
connection object. The method accepts three arguments:
The first argument is a SQL statement that will be passed and executed in the SQLite database. The run()
method only accepts SQL statements that don’t return results. The INSERT INTO migration VALUES (?, ..., ?)
statement inserts a row in the table migration
, and the ?
are placeholders that are later substituted with the values in the run()
method’s second argument.
The second argument is an array [row[0], ... row[5], row[6]]
. In the previous section, the parse()
method receives a chunk of data from the readable stream and transforms it into an array. Since the data arrives as an array, you must use array indexes like row[0]
, row[1]
, and so on to access each field value.
The third argument is a callback that runs when the data has been inserted or if an error occurred. The callback checks if an error occurred and logs the error message. If there are no errors, the function logs a success message in the console using the console.log()
method, letting you know that a row has been inserted along with the id.
Finally, remove the end
and error
events from your file. Due to the asynchronous nature of the node-sqlite3
methods, the end
and error
events execute before the data is inserted into the database, so they are no longer required.
Save and exit your file.
Run the insertData.js
file using node
:
- node insertData.js
Depending on your system, it may take some time, but node
should return the output below:
Output
Connected to the database successfully
Inserted a row with the id: 1
Inserted a row with the id: 2
...
Inserted a row with the id: 44308
Inserted a row with the id: 44309
Inserted a row with the id: 44310
The messages, especially the ids, prove that the rows from the CSV file have been saved into the database.
You can now read a CSV file and insert its content into the database. Next, you will write a CSV file.
In this section, you will retrieve data from the database and write it into a CSV file using streams.
Create and open writeCSV.js
in your editor:
- nano writeCSV.js
In your writeCSV.js
file, add the following lines to import the fs
and csv-stringify
modules and the database connection object from db.js
:
const fs = require("fs");
const { stringify } = require("csv-stringify");
const db = require("./db");
The csv-stringify
module transforms data from an object or array into a CSV text format.
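Conceptually, the module joins column values with commas and rows with newlines. The following plain-JavaScript sketch illustrates that idea; the sample headers and values are made up, and the real csv-stringify also handles quoting, escaping, and streaming:

```javascript
// A minimal sketch of the transformation csv-stringify performs:
// turning rows (arrays) into CSV text, headers first.
const columns = ["year_month", "direction", "estimate"]; // sample headers
const rows = [
  ["2001-01", "Arrivals", 344],
  ["2001-01", "Departures", 341],
];

// Join each row's fields with commas, and the rows with newlines.
const csvText = [columns, ...rows].map((row) => row.join(",")).join("\n");
console.log(csvText);
```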
Next, add the following lines to define a variable that contains the name of the CSV file you want to write data to and a writable stream that you will write data to:
...
const filename = "saved_from_db.csv";
const writableStream = fs.createWriteStream(filename);
const columns = [
"year_month",
"month_of_release",
"passenger_type",
"direction",
"sex",
"age",
"estimate",
];
The createWriteStream
method takes an argument of the filename you want to write your stream of data to, which is the saved_from_db.csv
file name stored in the filename
variable.
In the fourth line, you define a columns
variable, which stores an array containing the names of the headers for the CSV data. These headers will be written in the first line of the CSV file when you start writing the data to the file.
Still in your writeCSV.js
file, add the following lines to retrieve data from the database and write each row in the CSV file:
...
const stringifier = stringify({ header: true, columns: columns });
db.each(`select * from migration`, (error, row) => {
if (error) {
return console.log(error.message);
}
stringifier.write(row);
});
stringifier.pipe(writableStream);
console.log("Finished writing data");
First, you invoke the stringify
method with an object as an argument, which creates a transform stream. The transform stream converts the data from an object into CSV text. The object passed into the stringify()
method has two properties:
header
accepts a boolean value and generates a header row if the value is set to true
.
columns
takes an array containing the names of the columns that will be written in the first line of the CSV file if the header
option is set to true
.
Next, you invoke the each()
method from the db
connection object with two arguments. The first argument is the SQL statement select * from migration
that retrieves the rows one by one in the database. The second argument is a callback invoked each time a row is retrieved from the database. The callback takes two parameters: an error
object and a row
object containing data retrieved from a single row in the database. Within the callback, you check if the error
object is set in the if
statement. If the condition evaluates to true
, an error message is logged in the console using the console.log()
method. If there is no error, you invoke the write()
method on stringifier
, which writes the data into the stringifier
transform stream.
The pipe()
method connects the stringifier
transform stream to the writableStream
, so each chunk of CSV text is saved to the saved_from_db.csv
file as it is produced. Because each()
retrieves the rows asynchronously, the console.log()
call runs immediately, while the remaining data continues to be written to the file before the process exits.
The complete file will now look like the following:
const fs = require("fs");
const { stringify } = require("csv-stringify");
const db = require("./db");
const filename = "saved_from_db.csv";
const writableStream = fs.createWriteStream(filename);
const columns = [
"year_month",
"month_of_release",
"passenger_type",
"direction",
"sex",
"age",
"estimate",
];
const stringifier = stringify({ header: true, columns: columns });
db.each(`select * from migration`, (error, row) => {
if (error) {
return console.log(error.message);
}
stringifier.write(row);
});
stringifier.pipe(writableStream);
console.log("Finished writing data");
Save and close your file, then run the writeCSV.js
file in the terminal:
- node writeCSV.js
You will receive the following output:
Output
Finished writing data
To confirm that the data has been written, inspect the contents in the file using the cat
command:
- cat saved_from_db.csv
cat
will return all the rows written in the file (edited for brevity):
Output
year_month,month_of_release,passenger_type,direction,sex,age,estimate
2001-01,2020-09,Long-term migrant,Arrivals,Female,0-4 years,344
2001-01,2020-09,Long-term migrant,Arrivals,Male,0-4 years,341
2001-01,2020-09,Long-term migrant,Arrivals,Female,10-14 years,
...
You can now retrieve data from the database and write each row in a CSV file using streams.
In this article, you read a CSV file and inserted its data into a database using the node-csv
and node-sqlite3
modules. You then retrieved data from the database and wrote it to another CSV file.
You can now read and write CSV files. As a next step, you can work with large CSV datasets using the same implementation, since streams are memory-efficient, or you might look into a package like event-stream
that makes working with streams much easier.
To explore more about node-csv
, visit its documentation at CSV Project - Node.js CSV package. To learn more about node-sqlite3
, visit its GitHub documentation. To continue growing your Node.js skills, see the How To Code in Node.js series.
Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.
In this guide, you will learn how to install Node.js on a Debian 10 server three different ways:
For many users, using apt
with the default repository will be sufficient. If you need specific newer (or legacy) versions of Node, you should use the PPA repository. If you are actively developing Node applications and need to switch between versions frequently, choose the NVM method.
Before you begin, you should have a non-root user with sudo privileges set up on your system. You can learn how to set this up by following the initial server setup for Debian 10 tutorial.
Debian contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 10.24.0. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.
Warning: The version of Node.js included with Debian 10, version 10.24.0, is unsupported and unmaintained. You should not use this version in production, and should refer to one of the other sections in this tutorial to install a more recent version of Node.
To get Node.js from the default Debian software repository, you can use the apt
package manager. First, refresh your local package index:
- sudo apt update
Then install the Node.js package:
- sudo apt install nodejs
To verify that the installation was successful, run the node
command with the -v
flag to get the version:
- node -v
Output
v10.24.0
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, you’ll also want to install npm
, the Node.js package manager. You can do this by installing the npm
package with apt
:
- sudo apt install npm
This will allow you to install modules and packages to use with Node.js.
At this point you have successfully installed Node.js and npm
using apt
and the default Debian software repositories. The next section will show how to use an alternate repository to install different versions of Node.js.
To work with a more recent version of Node.js, you can install from a PPA (personal package archive) maintained by NodeSource. This is an alternate repository that still works with apt
, and will have more up-to-date versions of Node.js than the official Debian repositories. NodeSource has PPAs available for multiple Node versions. Refer to the NodeSource documentation for more information on the available versions.
From your home directory, use curl
to retrieve the installation script for your preferred Node.js version. If you do not have curl
installed, you can install it before proceeding to the next step with this command:
- sudo apt install curl
With curl
installed, you can begin your Node.js installation. This example installs version 16.x
. You can replace 16.x
with your preferred version.
- curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
- sudo apt install nodejs
You don’t need to install a separate package for npm
in this case, as it is included in the nodejs
package.
Verify the installation by running node
with the -v
version option:
- node -v
Output
v16.14.2
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Execute this command to verify that npm
is installed:
- npm -v
Output
8.5.0
An alternative to installing Node.js through apt
is to use a tool called nvm
, which stands for “Node Version Manager”. Rather than working at the operating system level, nvm
works at the level of an independent directory within your user’s home directory. This means that you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm
allows you to access the newest versions of Node.js while also retaining and managing previous releases. It is a different utility from apt
, however, and the versions of Node.js that you manage with it are distinct from those you manage with apt
.
To install nvm
on Debian 10, follow the installation instructions on the README file from the NVM Github repository.
The URL may change depending on the latest version of nvm
, but as of this writing, the script can be downloaded and executed by typing:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
This will install the nvm
script to your user account. To use it, you must first source your .bashrc
file so that the command becomes available in your current session:
- source ~/.bashrc
Now, you can list the available Node versions using nvm
:
- nvm list-remote
This command will produce a long output:
Output
...
v15.11.0
v15.12.0
v15.13.0
v15.14.0
v16.0.0
v16.1.0
v16.2.0
v16.3.0
v16.4.0
v16.4.1
v16.4.2
v16.5.0
v16.6.0
v16.6.1
v16.6.2
v16.7.0
v16.8.0
v16.9.0
v16.9.1
v16.10.0
v16.11.0
v16.11.1
v16.12.0
v16.13.0 (LTS: Gallium)
v16.13.1 (LTS: Gallium)
v16.13.2 (LTS: Gallium)
v16.14.0 (LTS: Gallium)
v16.14.1 (LTS: Gallium)
v16.14.2 (Latest LTS: Gallium)
v17.0.0
v17.0.1
...
You can install a version of Node by typing any of the release versions you see. For example, to install version v14.10.0, you can type:
- nvm install v14.10.0
You can view the different versions you have installed by typing:
- nvm ls
Output
-> v14.10.0
system
default -> v14.10.0
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v14.10.0) (default)
stable -> 14.10 (-> v14.10.0) (default)
...
This shows the currently active version on the first line (-> v14.10.0
), followed by some named aliases and the versions that those aliases point to.
Note: If you also have a version of Node.js installed through apt
, you may see a system
entry here. You can activate the system installed version of Node using nvm use system
.
Additionally, this output lists aliases for the various long-term support (LTS) releases of Node:
Output
. . .
lts/* -> lts/fermium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.11 (-> N/A)
lts/fermium -> v14.19.1 (-> N/A)
lts/gallium -> v16.14.2 (-> N/A)
We can install a release based on these aliases as well. For instance, to install the latest long-term support version, gallium
, run the following:
- nvm install lts/gallium
Output
Downloading and installing node v16.14.2...
...
Now using node v16.14.2 (npm v8.5.0)
You can verify that the installation was successful by typing:
- node -v
Output
v16.14.2
If you wish to use a particular Node version as a default, type the following with the version of your choosing:
- nvm alias default 14.10.0
This version will be automatically selected when you start a new session in Node. You can also reference it by the alias like this:
- nvm use default
Output
Now using node v14.10.0 (npm v6.14.8)
Each version of Node.js will keep track of its own packages and has npm
available to manage these.
You can uninstall Node.js using apt
or nvm
, depending on the version you want to target. To remove versions installed from the Debian repository or from the PPA, you will need to work with the apt
utility at the system level.
To remove either of these versions, type the following:
- sudo apt remove nodejs
This command will remove the package and the configuration files.
To uninstall a version of Node.js that you have enabled using nvm
, first determine whether or not the version you would like to remove is the current active version:
- nvm current
Output
v16.14.2
If the version you are targeting is not the current active version, you can run this command with the version you want to remove:
- nvm uninstall node_version_to_remove
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you must first deactivate nvm
to enable your changes:
- nvm deactivate
You can now uninstall the current version using the nvm uninstall
command with your current version of Node.js.
- nvm uninstall current_node_version
This will remove all files associated with the targeted version of Node.js except the cached files that can be used for reinstallation.
There are quite a few ways to get up and running with Node.js on your Debian 10 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in the Debian repository is an option for experimentation, installing from the NodeSource PPA or working with nvm
offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
The Node.js Read-Eval-Print-Loop (REPL) is an interactive shell that processes Node.js expressions. The shell reads JavaScript code the user enters, evaluates the result of interpreting the line of code, prints the result to the user, and loops until the user signals to quit.
The REPL is bundled with every Node.js installation and allows you to quickly test and explore JavaScript code within the Node environment without having to store it in a file.
To complete this tutorial, you will need:
If you have node
installed, then you also have the Node.js REPL. To start it, simply enter node
in your command line shell:
- node
This results in the REPL prompt:
>
The >
symbol lets you know that you can enter JavaScript code to be immediately evaluated.
For example, try adding two numbers in the REPL by typing this:
- 2 + 2
When you press ENTER
, the REPL will evaluate the expression and return:
4
To exit the REPL, you can type .exit
, or press CTRL+D
once, or press CTRL+C
twice, which will return you to the shell prompt.
With starting and stopping out of the way, let’s take a look at how you can use the REPL to execute simple JavaScript code.
The REPL is a quick way to test JavaScript code without having to create a file. Almost every valid JavaScript or Node.js expression can be executed in the REPL.
In the previous step you already tried out addition of two numbers, now let’s try division. To do so, start a new REPL:
- node
In the prompt type:
- 10 / 5
Press ENTER
, and the output will be 2
, as expected:
2
The REPL can also process operations on strings. Concatenate the following strings in your REPL by typing:
- "Hello " + "World"
Again, press ENTER
, and the string expression is evaluated:
'Hello World'
Note: You may have noticed that the output used single quotes instead of double quotes. In JavaScript, the quotes used for a string do not affect its value. If the string you entered used a single quote, the REPL is smart enough to use double quotes in the output.
When writing Node.js code, it’s common to print messages via the global console.log
method or a similar function. Type the following at the prompt:
- console.log("Hi")
Pressing ENTER
yields the following output:
Hi
undefined
The first result is the output from console.log
, which prints a message to the stdout
stream (the screen). Because console.log
prints a string instead of returning a string, the message is seen without quotes. The undefined
is the return value of the function.
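You can observe the same behavior in a script. This short sketch shows that the message goes to stdout while the call itself evaluates to undefined:

```javascript
// console.log writes its message to stdout but returns undefined,
// which is why the REPL prints both "Hi" and "undefined".
const result = console.log("Hi");
console.log(result); // prints: undefined
```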
Rarely do you just work with literals in JavaScript. Creating a variable in the REPL works in the same fashion as working with .js
files. Type the following at the prompt:
- let age = 30
Pressing ENTER
results in:
undefined
Like before, with console.log
, the return value of this command is undefined
. The age
variable will be available until you exit the REPL session. For example, you can multiply age
by two. Type the following at the prompt and press ENTER
:
- age * 2
The result is:
60
Because the REPL returns values, you don’t need to use console.log
or similar functions to see the output on the screen. By default, any returned value will appear on the screen.
Multi-line blocks of code are supported as well. For example, you can create a function that adds 3 to a given number. Start the function by typing the following:
- const add3 = (num) => {
Then, pressing ENTER
will change the prompt to:
...
The REPL notices an open curly bracket and therefore assumes you’re writing more than one line of code, which needs to be indented. To make it easier to read, the REPL adds three dots and a space on the next line, so the following code appears indented.
Enter the second and third lines of the function, one at a time, pressing ENTER
after each:
return num + 3;
}
Pressing ENTER
after the closing curly bracket will display an undefined
, which is the “return value” of the function assignment to a variable. The ...
prompt is now gone and the >
prompt returns:
undefined
>
Now, call add3()
on a value:
- add3(10)
As expected, the output is:
13
You can use the REPL to try out bits of JavaScript code before including them into your programs. The REPL also includes some handy shortcuts to make that process easier.
The REPL provides shortcuts to decrease coding time when possible. It keeps a history of all the entered commands and allows you to cycle through them and repeat a command if necessary.
For example, enter the following string:
- "The answer to life the universe and everything is 32"
This results in:
'The answer to life the universe and everything is 32'
To edit the string and change the “32” to “42”, use the UP
arrow key to return to the previous command:
> "The answer to life the universe and everything is 32"
Move the cursor to the left, delete 3
, enter 4
, and press ENTER
again:
'The answer to life the universe and everything is 42'
Continue to press the UP
arrow key, and you’ll go further back through your history until the first used command in the current REPL session. In contrast, pressing DOWN
will iterate towards the more recent commands in the history.
When you are done maneuvering through your command history, press DOWN
repeatedly until you have exhausted your recent command history and are once again seeing the prompt.
To quickly get the last evaluated value, use the underscore character. At the prompt, type _
and press ENTER
:
- _
The previously entered string will appear again:
'The answer to life the universe and everything is 42'
The REPL also has an autocompletion for functions, variables, and keywords. If you wanted to find the square root of a number using the Math.sqrt
function, enter the first few letters, like so:
- Math.sq
Then press the TAB
key and the REPL will autocomplete the function:
> Math.sqrt
When there are multiple possibilities for autocompletion, you’re prompted with all the available options. For example, enter just:
- Math.
And press TAB
twice. You’re greeted with the possible autocompletions:
> Math.
Math.__defineGetter__ Math.__defineSetter__ Math.__lookupGetter__
Math.__lookupSetter__ Math.__proto__ Math.constructor
Math.hasOwnProperty Math.isPrototypeOf Math.propertyIsEnumerable
Math.toLocaleString Math.toString Math.valueOf
Math.E Math.LN10 Math.LN2
Math.LOG10E Math.LOG2E Math.PI
Math.SQRT1_2 Math.SQRT2 Math.abs
Math.acos Math.acosh Math.asin
Math.asinh Math.atan Math.atan2
Math.atanh Math.cbrt Math.ceil
Math.clz32 Math.cos Math.cosh
Math.exp Math.expm1 Math.floor
Math.fround Math.hypot Math.imul
Math.log Math.log10 Math.log1p
Math.log2 Math.max Math.min
Math.pow Math.random Math.round
Math.sign Math.sin Math.sinh
Math.sqrt Math.tan Math.tanh
Math.trunc
Depending on the screen size of your shell, the output may be displayed with a different number of rows and columns. This is a list of all the functions and properties that are available in the Math
module.
Press CTRL+C
to get to a new line in the prompt without executing what is in the current line.
Knowing the REPL shortcuts makes you more efficient when using it. For further productivity, the REPL also provides its own set of commands.
The REPL has specific keywords to help control its behavior. Each command begins with a dot .
.
To list all the available commands, use the .help
command:
- .help
There aren’t many, but they’re useful for getting things done in the REPL:
.break Sometimes you get stuck, this gets you out
.clear Alias for .break
.editor Enter editor mode
.exit Exit the repl
.help Print this help message
.load Load JS from a file into the REPL session
.save Save all evaluated commands in this REPL session to a file
Press ^C to abort current expression, ^D to exit the repl
If ever you forget a command, you can always refer to .help
to see what it does.
Using .break
or .clear
, it’s easy to exit a multi-line expression. For example, begin a for loop
as follows:
- for (let i = 0; i < 100000000; i++) {
To exit from entering any more lines, instead of entering the next one, use the .break
or .clear
command to break out:
.break
You’ll see a new prompt:
>
The REPL will move on to a new line without executing any code, similar to pressing CTRL+C
.
The .save
command stores all the code you ran since starting the REPL, into a file. The .load
command runs all the JavaScript code from a file inside the REPL.
Quit the session using the .exit
command or with the CTRL+D
shortcut. Now start a new REPL with node
. This way, only the code you are about to write will be saved.
Create an array with fruits:
- fruits = ['banana', 'apple', 'mango']
In the next line, the REPL will display:
[ 'banana', 'apple', 'mango' ]
Save this variable to a new file, fruits.js
:
- .save fruits.js
You’re greeted with the confirmation:
Session saved to: fruits.js
The file is saved in the same directory where you opened the Node.js REPL. For example, if you opened the Node.js REPL in your home directory, then your file will be saved in your home directory.
Exit the session and start a new REPL with node
. At the prompt, load the fruits.js
file by entering:
- .load fruits.js
This results in:
fruits = ['banana', 'apple', 'mango']
[ 'banana', 'apple', 'mango' ]
The .load
command reads each line of code and executes it, as expected of a JavaScript interpreter. You can now use the fruits
variable as if it was available in the current session all the time.
Type the following command and press ENTER
:
- fruits[1]
The REPL will output:
'apple'
You can load any JavaScript file with the .load
command, not only items you saved. Let’s quickly demonstrate by opening your preferred code editor or nano
, a command line editor, and creating a new file called peanuts.js
:
- nano peanuts.js
Now that the file is open, type the following:
console.log('I love peanuts!');
Save and exit nano by pressing CTRL+X
.
In the same directory where you saved peanuts.js
, start the Node.js REPL with node
. Load peanuts.js
in your session:
- .load peanuts.js
The .load
command will execute the single console
statement and display the following output:
console.log('I love peanuts!');
I love peanuts!
undefined
>
When your REPL usage goes longer than expected, or you believe you have an interesting code snippet worth sharing or exploring in more depth, you can use the .save
and .load
commands to make both those goals possible.
The REPL is an interactive environment that allows you to execute JavaScript code without first having to write it to a file.
You can use the REPL to try out JavaScript code from other tutorials:
Node.js is a popular open-source runtime environment that can execute JavaScript outside of the browser using the V8 JavaScript engine, which is the same engine used to power the Google Chrome web browser’s JavaScript execution. The Node runtime is commonly used to create command line tools and web servers.
Learning Node.js will allow you to write your front-end code and your back-end code in the same language. Using JavaScript throughout your entire stack can help reduce time for context switching, and libraries are more easily shared between your back-end server and front-end projects.
Also, thanks to its support for asynchronous execution, Node.js excels at I/O-intensive tasks, which is what makes it so suitable for the web. Real-time applications, like video streaming, or applications that continuously send and receive data, can run more efficiently when written in Node.js.
In this tutorial you’ll create your first program with the Node.js runtime. You’ll be introduced to a few Node-specific concepts and build your way up to create a program that helps users inspect environment variables on their system. To do this, you’ll learn how to output strings to the console, receive input from the user, and access environment variables.
To complete this tutorial, you will need:
To write a “Hello, World!” program, open up a command line text editor such as nano
and create a new file:
- nano hello.js
With the text editor opened, enter the following code:
console.log("Hello World");
The console
object in Node.js provides simple methods to write to stdout
, stderr
, or to any other Node.js stream, which in most cases is the command line. The log
method prints to the stdout
stream, so you can see it in your console.
In the context of Node.js, streams are objects that can either receive data, like the stdout
stream, or objects that can output data, like a network socket or a file. In the case of the stdout
and stderr
streams, any data sent to them will be shown in the console. One of the great things about streams is that they’re easily redirected; for example, you can redirect the output of your program to a file.
Save and exit nano
by pressing CTRL+X
. When prompted to save the file, press Y
. Now your program is ready to run.
To run this program, use the node
command as follows:
- node hello.js
The hello.js
program will execute and display the following output:
Output
Hello World
The Node.js interpreter read the file and executed console.log("Hello World");
by calling the log
method of the global console
object. The string "Hello World"
was passed as an argument to the log
function.
Although quotation marks are necessary in the code to indicate that the text is a string, they are not printed to the screen.
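Because console.log writes to the stdout stream, the shell can redirect that stream instead of showing it on the screen. A quick sketch, assuming node is on your PATH; output.txt is an arbitrary filename:

```shell
# Recreate hello.js so this snippet is self-contained, then redirect
# the program's stdout stream into a file instead of the screen.
printf 'console.log("Hello World");\n' > hello.js
node hello.js > output.txt

# The file now holds the text the program would have printed.
cat output.txt
```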
Having confirmed that the program works, let’s make it more interactive.
Every time you run the Node.js “Hello, World!” program, it produces the same output. In order to make the program more dynamic, let’s get input from the user and display it on the screen.
Command line tools often accept various arguments that modify their behavior. For example, running node
with the --version
argument prints the installed version instead of running the interpreter. In this step, you will make your code accept user input via command line arguments.
Create a new file arguments.js
with nano:
- nano arguments.js
Enter the following code:
console.log(process.argv);
The process
object is a global Node.js object that contains functions and data all related to the currently running Node.js process. The argv
property is an array of strings containing all the command line arguments given to a program.
Save and exit nano
by typing CTRL+X
. When prompted to save the file, press Y
.
Now when you run this program, you provide a command line argument like this:
- node arguments.js hello world
The output looks like the following:
Output
[ '/usr/bin/node',
'/home/sammy/first-program/arguments.js',
'hello',
'world' ]
The first argument in the process.argv
array is always the location of the Node.js binary that is running the program. The second argument is always the location of the file being run. The remaining arguments are what the user entered, in this case: hello
and world
.
We are mostly interested in the arguments that the user entered, not the default ones that Node.js provides. Open the arguments.js
file for editing:
- nano arguments.js
Change console.log(process.argv);
to the following:
console.log(process.argv.slice(2));
Because argv
is an array, you can use JavaScript’s built-in slice
method that returns a selection of elements. When you provide the slice
function with 2
as its argument, you get all the elements of argv
that come after its second element; that is, the arguments the user entered.
Re-run the program with the node
command and the same arguments as last time:
- node arguments.js hello world
Now, the output looks like this:
Output
[ 'hello', 'world' ]
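The slicing step can also be tried on its own. This sketch uses hard-coded sample paths in place of the real values process.argv would contain at runtime:

```javascript
// A sample argv-style array; the first two entries stand in for the
// paths that Node.js normally supplies automatically.
const argv = [
  "/usr/bin/node",
  "/home/sammy/first-program/arguments.js",
  "hello",
  "world",
];

// slice(2) drops the binary and script paths, keeping only the
// arguments the user entered.
const userArgs = argv.slice(2);
console.log(userArgs); // prints: [ 'hello', 'world' ]
```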
Now that you can collect input from the user, let’s collect input from the program’s environment.
Environment variables are key-value data stored outside of a program and provided by the OS. They are typically set by the system or user and are available to all running processes for configuration or state purposes. You can use Node’s process
object to access them.
Use nano
to create a new file environment.js
:
- nano environment.js
Add the following code:
console.log(process.env);
The env
object stores all the environment variables that are available when Node.js is running the program.
Save and exit like before, and run the environment.js
file with the node
command.
- node environment.js
Upon running the program, you should see output similar to the following:
Output{ SHELL: '/bin/bash',
SESSION_MANAGER:
'local/digitalocean:@/tmp/.ICE-unix/1003,unix/digitalocean:/tmp/.ICE-unix/1003',
COLORTERM: 'truecolor',
SSH_AUTH_SOCK: '/run/user/1000/keyring/ssh',
XMODIFIERS: '@im=ibus',
DESKTOP_SESSION: 'ubuntu',
SSH_AGENT_PID: '1150',
PWD: '/home/sammy/first-program',
LOGNAME: 'sammy',
GPG_AGENT_INFO: '/run/user/1000/gnupg/S.gpg-agent:0:1',
GJS_DEBUG_TOPICS: 'JS ERROR;JS LOG',
WINDOWPATH: '2',
HOME: '/home/sammy',
USERNAME: 'sammy',
IM_CONFIG_PHASE: '2',
LANG: 'en_US.UTF-8',
VTE_VERSION: '5601',
CLUTTER_IM_MODULE: 'xim',
GJS_DEBUG_OUTPUT: 'stderr',
LESSCLOSE: '/usr/bin/lesspipe %s %s',
TERM: 'xterm-256color',
LESSOPEN: '| /usr/bin/lesspipe %s',
USER: 'sammy',
DISPLAY: ':0',
SHLVL: '1',
PATH:
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin',
DBUS_SESSION_BUS_ADDRESS: 'unix:path=/run/user/1000/bus',
_: '/usr/bin/node',
OLDPWD: '/home/sammy' }
Keep in mind that many of the environment variables you see are dependent on the configuration and settings of your system, and your output may look substantially different than what you see here. Rather than viewing a long list of environment variables, you might want to retrieve a specific one.
In this step, you’ll retrieve individual environment variables from the global process.env
object and print their values to the console.
The process.env
object is a simple mapping between environment variable names and their values stored as strings. Like all objects in JavaScript, you access an individual property by referencing its name in square brackets.
Open the environment.js
file for editing:
- nano environment.js
Change console.log(process.env);
to:
console.log(process.env["HOME"]);
Save the file and exit. Now run the environment.js
program:
- node environment.js
The output now looks like this:
Output/home/sammy
Instead of printing the entire object, you now only print the HOME
property of process.env
, which stores the value of the $HOME
environment variable.
Again, keep in mind that the output from this code will likely be different than what you see here because it is specific to your system. Now that you can specify the environment variable to retrieve, you can enhance your program by asking the user for the variable they want to see.
Next, you’ll use the ability to read command line arguments and environment variables to create a command line utility that prints the value of an environment variable to the screen.
Use nano
to create a new file echo.js
:
- nano echo.js
Add the following code:
const args = process.argv.slice(2);
console.log(process.env[args[0]]);
The first line of echo.js
stores all the command line arguments that the user provided into a constant variable called args
. The second line prints the value of the environment variable named by the first element of args
; that is, the first command line argument the user provided.
Save and exit nano
, then run the program as follows:
- node echo.js HOME
Now, the output would be:
Output/home/sammy
The argument HOME
was saved to the args
array, which was then used to find its value in the environment via the process.env
object.
At this point you can now access the value of any environment variable on your system. To verify this, try viewing the following variables: PWD
, USER
, PATH
.
Retrieving single variables is good, but letting the user specify how many variables they want would be better.
Currently, the application can only inspect one environment variable at a time. It would be useful if we could accept multiple command line arguments and get their corresponding value in the environment. Use nano
to edit echo.js
:
- nano echo.js
Edit the file so that it has the following code instead:
const args = process.argv.slice(2);
args.forEach(arg => {
console.log(process.env[arg]);
});
The forEach
method is a standard JavaScript method available on all array objects. It accepts a callback function that it calls once for each element of the array. You use forEach
on the args
array, providing it a callback function that prints the current argument’s value in the environment.
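As a minimal sketch of forEach
on its own (the array values here are arbitrary):

```javascript
// forEach invokes the callback once per element, in order.
const names = ['HOME', 'PWD'];
const visited = [];
names.forEach((name) => {
  visited.push(name.toLowerCase());
});
console.log(visited); // [ 'home', 'pwd' ]
```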
Save and exit the file. Now re-run the program with two arguments:
- node echo.js HOME PWD
You would see the following output:
Output/home/sammy
/home/sammy/first-program
The forEach
function ensures that every command line argument in the args
array is printed.
Now you have a way to retrieve the variables the user asks for, but we still need to handle the case where the user enters bad data.
To see what happens if you give the program an argument that is not a valid environment variable, run the following:
- node echo.js HOME PWD NOT_DEFINED
The output will look similar to the following:
Output/home/sammy
/home/sammy/first-program
undefined
The first two lines print as expected, and the last line only has undefined
. In JavaScript, an undefined
value means that a variable or property has not been assigned a value. Because NOT_DEFINED
is not a valid environment variable, it is shown as undefined
.
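The same behavior can be seen with any plain JavaScript object; here env
is a hypothetical stand-in for process.env
:

```javascript
// env stands in for process.env in this sketch.
const env = { HOME: '/home/sammy' };

// Accessing a key that was never set yields undefined rather than an error.
console.log(env['NOT_DEFINED']); // undefined
console.log(env['NOT_DEFINED'] === undefined); // true
```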
It would be more helpful to a user to see an error message if their command line argument was not found in the environment.
Open echo.js
for editing:
- nano echo.js
Edit echo.js
so that it has the following code:
const args = process.argv.slice(2);
args.forEach(arg => {
let envVar = process.env[arg];
if (envVar === undefined) {
console.error(`Could not find "${arg}" in environment`);
} else {
console.log(envVar);
}
});
Here, you have modified the callback function provided to forEach
to do the following things:
- Get the argument’s value from the environment and store it in a variable called envVar
.
- Check whether the value of envVar
is undefined
.
- If envVar
is undefined
, print a helpful message indicating that it could not be found.
Note: The console.error
function prints a message to the screen via the stderr
stream, whereas console.log
prints to the screen via the stdout
stream. When you run this program via the command line, you won’t notice the difference between the stdout
and stderr
streams, but it is good practice to print errors via the stderr
stream so that they can be more easily identified and processed by other programs, which can tell the difference.
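One way to see that console.error
really targets stderr
is to temporarily intercept the stream’s write function. This is a test-only sketch, not something you would do in production code:

```javascript
// Temporarily replace process.stderr.write to capture what console.error emits.
let captured = '';
const originalWrite = process.stderr.write;
process.stderr.write = (chunk) => {
  captured += chunk;
  return true;
};

console.error('Could not find "NOT_DEFINED" in environment');

// Restore the real stderr before doing anything else.
process.stderr.write = originalWrite;

console.log(captured.includes('NOT_DEFINED')); // true
```

On the command line, the two streams can also be separated with shell redirection, for example node echo.js HOME NOT_DEFINED 2> errors.log
, which sends only the error messages to the errors.log
file.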
Now run the following command once more:
- node echo.js HOME PWD NOT_DEFINED
This time the output will be:
Output/home/sammy
/home/sammy/first-program
Could not find "NOT_DEFINED" in environment
Now when you provide a command line argument that’s not an environment variable, you get a clear error message stating so.
Your first program displayed “Hello World” to the screen, and now you have written a Node.js command line utility that reads user arguments to display environment variables.
If you want to take this further, you can change the behavior of this program even more. For example, you may want to validate the command line arguments before you print. If an argument is undefined, you can return an error, and the user will only get output if all arguments are valid environment variables.
If you’d like to continue learning Node.js, you can return to the How To Code in Node.js series, or browse programming projects and setups on our Node topic page.
The concept of streams in computing usually describes the delivery of data in a steady, continuous flow. You can use streams for reading from or writing to a source continuously, thus eliminating the need to fit all the data in memory at once.
Using streams provides two major advantages. One is that you can use your memory efficiently since you do not have to load all the data into memory before you can begin processing. Another advantage is that using streams is time-efficient. You can start processing data almost immediately instead of waiting for the entire payload. These advantages make streams a suitable tool for large data transfer in I/O operations. Files are a collection of bytes that contain some data. Since files are a common data source in Node.js, streams can provide an efficient way to work with files in Node.js.
Node.js provides a streaming API in the stream
module, a core Node.js module, for working with streams. All Node.js streams are an instance of the EventEmitter
class (for more on this, see Using Event Emitters in Node.js). They emit different events you can listen for at various intervals during the data transmission process. The native stream
module provides an interface consisting of different functions for listening to those events that you can use to read and write data, manage the transmission life cycle, and handle transmission errors.
There are four different kinds of streams in Node.js. They are:
- Readable streams: streams from which you can read data.
- Writable streams: streams to which you can write data.
- Duplex streams: streams that are both readable and writable.
- Transform streams: duplex streams that can modify or transform the data as it is written and read.
The file system module (fs
) is a native Node.js module for manipulating files and navigating the local file system in general. It provides several methods for doing this. Two of these methods implement the streaming API. They provide an interface for reading and writing files using streams. Using these two methods, you can create readable and writable file streams.
In this article, you will read from and write to a file using the fs.createReadStream
and fs.createWriteStream
functions. You will also use the output of one stream as the input of another and implement a custom transform steam. By performing these actions, you will learn to use streams to work with files in Node.js. To demonstrate these concepts, you will write a command-line program with commands that replicate the cat
functionality found in Linux-based systems, write input from a terminal to a file, copy files, and transform the content of a file.
To complete this tutorial, you will need:
- Familiarity with the fs
module, which you can find in How To Work with Files using the fs Module in Node.js.
In this step, you will write a command-line program with basic commands. This command-line program will demonstrate the concepts you’ll learn later in the tutorial, where you’ll use these commands with the functions you’ll create to work with files.
To begin, create a folder to contain all your files for this program. In your terminal, create a folder named node-file-streams
:
- mkdir node-file-streams
Using the cd
command, change your working directory to the new folder:
- cd node-file-streams
Next, create and open a file called mycliprogram
in your favorite text editor. This tutorial uses GNU nano
, a terminal text editor. To use nano to create and open your file, type the following command:
- nano mycliprogram
In your text editor, add the following code to specify the shebang, store the array of command-line arguments from the Node.js process, and store the list of commands the application should have.
#!/usr/bin/env node
const args = process.argv;
const commands = ['read', 'write', 'copy', 'reverse'];
The first line contains a shebang, which is a path to the program interpreter. Adding this line tells the program loader to parse this program using Node.js.
When you run a Node.js script on the command-line, several command-line arguments are passed when the Node.js process runs. You can access these arguments using the argv
property or the Node.js process
. The argv
property is an array that contains the command-line arguments passed to a Node.js script. In the second line, you assign that property to a variable called args
.
Next, create a getHelpText
function to display a manual of how to use the program. Add the code below to your mycliprogram
file:
...
const getHelpText = function() {
const helpText = `
simplecli is a simple cli program to demonstrate how to handle files using streams.
usage:
mycliprogram <command> <path_to_file>
<command> can be:
read: Print a file's contents to the terminal
write: Write a message from the terminal to a file
copy: Create a copy of a file in the current directory
reverse: Reverse the content of a file and save its output to another file.
<path_to_file> is the path to the file you want to work with.
`;
console.log(helpText);
}
The getHelpText
function prints out the multi-line string you created as the help text for the program. The help text shows the command-line arguments or parameters that the program expects.
Next, you’ll add the control logic to check the length of args
and provide the appropriate response:
...
let command = '';
if(args.length < 3) {
getHelpText();
return;
}
else if(args.length > 4) {
console.log('More arguments provided than expected');
getHelpText();
return;
}
else {
command = args[2]
if(!args[3]) {
console.log('This tool requires at least one path to a file');
getHelpText();
return;
}
}
In the code snippet above, you have created an empty string command
to store the command received from the terminal. The first if
block checks whether the length of the args
array is less than 3. If it is, no additional arguments were passed when running the program. In this case, it prints the help text to the terminal and terminates.
The else if
block checks to see if the length of the args
array is greater than 4. If it is, then the program has received more arguments than it needs. The program will print a message to this effect along with the help text and terminate.
Finally, in the else
block, you store the third element or the element in the second index of the args
array in the command
variable. The code also checks whether there is a fourth element or an element with index = 3 in the args
array. If the item does not exist, it prints a message to the terminal indicating that you need a file path to continue.
Save the file. Then run the application:
- ./mycliprogram
You might get a permission denied
error similar to the output below:
Output-bash: ./mycliprogram: Permission denied
To fix this error, you will need to provide the file with execution permissions, which you can do with the following command:
- chmod +x mycliprogram
Re-run the file again. The output will look similar to this:
Outputsimplecli is a simple cli program to demonstrate how to handle files using streams.
usage:
mycliprogram <command> <path_to_file>
<command> can be:
read: Print a file's contents to the terminal
write: Write a message from the terminal to a file
copy: Create a copy of a file in the current directory
reverse: Reverse the content of a file and save its output to another file.
<path_to_file> is the path to the file you want to work with.
Finally, you are going to partially implement the commands in the commands
array you created earlier. Open the mycliprogram
file and add the code below:
...
switch(commands.indexOf(command)) {
case 0:
console.log('command is read');
break;
case 1:
console.log('command is write');
break;
case 2:
console.log('command is copy');
break;
case 3:
console.log('command is reverse');
break;
default:
console.log('You entered a wrong command. See help text below for supported functions');
getHelpText();
return;
}
Any time you enter a command found in the switch statement, the program runs the appropriate case block for the command. For this partial implementation, you print the name of the command to the terminal. If the string is not in the list of commands you created above, the program will print out a message to that effect with the help text. Then the program will terminate.
Save the file, then re-run the program with the read
command and any file name:
- ./mycliprogram read test.txt
The output will look similar to this:
Outputcommand is read
You have now successfully created a command-line program. In the following section, you will replicate the cat
functionality as the read
command in the application using createReadStream()
.
createReadStream()
The read
command in the command-line application will read a file from the file system and print it out to the terminal similar to the cat
command in a Linux-based terminal. In this section, you will implement that functionality using createReadStream()
from the fs
module.
The createReadStream
function creates a readable stream that emits events that you can listen to since it inherits from the EventEmitter
class. The data
event is one of these events. Every time the readable stream reads a piece of data, it emits the data
event, releasing a piece of data. When used with a callback function, it invokes the callback with that piece of data or chunk
, and you can process that data within that callback function. In this case, you want to display that chunk in the terminal.
To begin, add a text file to your working directory for easy access. In this section and some subsequent ones, you will be using a file called lorem-ipsum.txt
. It is a text file containing ~1200 lines of lorem ipsum text generated using the Lorem Ipsum Generator, and it is hosted on GitHub. In your terminal, enter the following command to download the file to your working directory:
- wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/lorem-ipsum.txt
To replicate the cat
functionality in your command-line application, you’ll need to import the fs
module because it contains the createReadStream
function you need. To do this, open the mycliprogram
file and add this line immediately after the shebang:
#!/usr/bin/env node
const fs = require('fs');
Next, you will create a function below the switch
statement called read()
with a single parameter: the file path for the file you want to read. This function will create a readable stream from that file and listen for the data
event on that stream.
...
function read(filePath) {
const readableStream = fs.createReadStream(filePath);
readableStream.on('error', function (error) {
console.log(`error: ${error.message}`);
})
readableStream.on('data', (chunk) => {
console.log(chunk);
})
}
The code also checks for errors by listening for the error
event. When an error occurs, an error message will print to the terminal.
Finally, you should replace console.log()
with the read()
function in the first case block case 0
as shown in the code block below:
...
switch(commands.indexOf(command)) {
case 0:
read(args[3]);
break;
...
}
Save the file to persist the new changes and run the program:
- ./mycliprogram read lorem-ipsum.txt
The output will look similar to this:
Output<Buffer 0a 0a 4c 6f 72 65 6d 20 69 70 73 75 6d 20 64 6f 6c 6f 72 20 73 69 74 20 61 6d 65 74 2c 20 63 6f 6e 73 65 63 74 65 74 75 72 20 61 64 69 70 69 73 63 69 ... >
...
<Buffer 76 69 74 61 65 20 61 6e 74 65 20 66 61 63 69 6c 69 73 69 73 20 6d 61 78 69 6d 75 73 20 75 74 20 69 64 20 73 61 70 69 65 6e 2e 20 50 65 6c 6c 65 6e 74 ... >
Based on the output above, you can see that the data was read in chunks or pieces, and these pieces of data are of the Buffer
type. For the sake of brevity, the terminal output above shows only two chunks, and the ellipsis indicates that there are several buffers in between the chunks shown here. The larger the file, the greater the number of buffers or chunks.
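The decoding that the encoding option performs can be shown directly with the Buffer
class:

```javascript
// A Buffer holds raw bytes; toString decodes them using the given encoding.
const chunk = Buffer.from('Lorem ipsum dolor sit amet', 'utf8');

console.log(chunk); // e.g. <Buffer 4c 6f 72 65 6d 20 69 70 73 75 6d ...>
console.log(chunk.toString('utf8')); // Lorem ipsum dolor sit amet
```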
To return the data in a human-readable format, you will pass the string value of the encoding type you want as a second argument to the createReadStream()
function. Add the following highlighted code to set the encoding type to utf8
.
...
const readableStream = fs.createReadStream(filePath, 'utf8')
...
Re-running the program will display the contents of the file in the terminal. The program prints the lorem ipsum text from the lorem-ipsum.txt
file line by line as it appears in the file.
OutputLorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean est tortor, eleifend et enim vitae, mattis condimentum elit. In dictum ex turpis, ac rutrum libero tempus sed...
...
...Quisque nisi diam, viverra vel aliquam nec, aliquet ut nisi. Nullam convallis dictum nisi quis hendrerit. Maecenas venenatis lorem id faucibus venenatis. Suspendisse sodales, tortor ut condimentum fringilla, turpis erat venenatis justo, lobortis egestas massa massa sed magna. Phasellus in enim vel ante viverra ultricies.
The output above shows a small fraction of the content of the file printed to the terminal. When you compare the terminal output with the lorem-ipsum.txt
file, you will see that the content is the same and takes the same formatting as the file, just like with the cat
command.
In this section, you implemented the cat
functionality in your command-line program to read the content of a file and print it to the terminal using the createReadStream
function. In the next step, you will create a file based on the input from the terminal using createWriteStream()
.
createWriteStream()
In this section, you will write input from the terminal to a file using createWriteStream()
. The createWriteStream
function returns a writable file stream that you can write data to. Like the readable stream in the previous step, this writable stream emits a set of events like error
, finish
, and pipe
. Additionally, it provides the write
function for writing data to the stream in chunks or bits. The write
function takes in the chunk
, which could be a string, a Buffer
, a Uint8Array
, or any other JavaScript value. It also allows you to specify an encoding type if the chunk is a string.
To write input from a terminal to a file, you will create a function called write
in your command-line program. In this function, you will create a prompt that receives input from the terminal (until the user terminates it) and writes the data to a file.
First, you will need to import the readline
module at the top of the mycliprogram
file. The readline
module is a native Node.js module that you can use to receive data from a readable stream like the standard input (stdin
) or your terminal one line at a time. Open your mycliprogram
file and add the highlighted line:
#!/usr/bin/env node
const fs = require('fs');
const readline = require('readline');
Then, add the following code below the read()
function.
...
function write(filePath) {
const writableStream = fs.createWriteStream(filePath);
writableStream.on('error', (error) => {
console.log(`An error occurred while writing to the file. Error: ${error.message}`);
});
}
Here, you are creating a writable stream with the filePath
parameter. This file path will be the command-line argument after the write
word. You are also listening for the error event if anything goes wrong (for example, if you provide a filePath
that does not exist).
Next, you will write the prompt to receive a message from the terminal and write it to the specified filePath
using the readline
module you imported earlier. To create a readline interface, a prompt, and to listen for the line
event, update the write
function as shown in the block:
...
function write(filePath) {
const writableStream = fs.createWriteStream(filePath);
writableStream.on('error', (error) => {
console.log(`An error occurred while writing to the file. Error: ${error.message}`);
});
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
prompt: 'Enter a sentence: '
});
rl.prompt();
rl.on('line', (line) => {
switch (line.trim()) {
case 'exit':
rl.close();
break;
default:
const sentence = line + '\n';
writableStream.write(sentence);
rl.prompt();
break;
}
}).on('close', () => {
writableStream.end();
writableStream.on('finish', () => {
console.log(`All your sentences have been written to ${filePath}`);
})
setTimeout(() => {
process.exit(0);
}, 100);
});
}
You created a readline interface (rl
) that allows the program to read the standard input (stdin
) from your terminal on a line-by-line basis and write a specified prompt
string to standard output (stdout
). You also called the prompt()
function to write the configured prompt
message to a new line and to allow the user to provide additional input.
Then you chained two event listeners together on the rl
interface. The first one listens for the line
event emitted each time the input stream receives an end-of-line input. This input could be a line feed character (\n
), the carriage return character (\r
), or both characters together (\r\n
), and it usually occurs when you press the ENTER
or return ↵
key on your computer. Therefore, any time you press either of these keys while typing in the terminal, the line
event is emitted. The callback function receives a string containing the single line of input line
.
You trimmed the line and checked to see if it is the word exit
. If not, the program will add a new line character to line
and write the sentence
to the filePath
using the .write()
function. Then you called the prompt function to prompt the user to enter another line of text. If the line
is exit
, the program calls the close
function on the rl
interface. The close
function closes the rl
instance and releases the standard input (stdin
) and output (stdout
) streams.
This function brings us to the second event you listened for on the rl
instance: the close
event. This event is emitted when you call rl.close()
. After writing data to a stream, you have to call the end
function on the stream to tell your program that it should no longer write data to the writable stream. Doing this will ensure that the data is completely flushed to your output file. Therefore, when you type the word exit
, you close the rl
instance and stop your writable stream by calling the end
function.
To provide feedback to the user that the program has successfully written all the text from the terminal to the specified filePath
, you listened for the finish
event on writableStream
. In the callback function, you logged a message to the terminal to inform the user when writing is complete. Finally, you exited the process after 100ms to provide enough time for the finish
event to provide feedback.
Finally, to call this function in your mycliprogram
, replace the console.log
statement in the case 1
block in the switch
statement with the new write
function, as shown here:
...
switch(commands.indexOf(command)) {
...
case 1:
write(args[3]);
break;
...
}
Save the file containing the new changes. Then run the command-line application in your terminal with the write
command.
- ./mycliprogram write output.txt
At the Enter a sentence
prompt, add any input you’d like. After a couple of entries, type exit
.
The output will look similar to this (with your input displaying instead of the highlighted lines):
OutputEnter a sentence: Twinkle, twinkle, little star
Enter a sentence: How I wonder what you are
Enter a sentence: Up above the hills so high
Enter a sentence: Like a diamond in the sky
Enter a sentence: exit
All your sentences have been written to output.txt
Check output.txt
to see the file content using the read
command you created earlier.
- ./mycliprogram read output.txt
The terminal output should contain all the text you have typed into the command except exit
. Based on the input above, the output.txt
file has the following content:
OutputTwinkle, twinkle, little star
How I wonder what you are
Up above the hills so high
Like a diamond in the sky
In this step, you wrote to a file using streams. Next, you will implement the function that copies files in your command-line program.
pipe()
In this step, you will use the pipe
function to create a copy of a file using streams. Although there are other ways to copy files using streams, using pipe
is preferred because you don’t need to manage the data flow.
For example, one way to copy files using streams would be to create a readable stream for the file, listen to the stream on the data
event, and write each chunk
from the stream event to a writable stream of the file copy. The snippet below shows an example:
const fs = require('fs');
const readableStream = fs.createReadStream('lorem-ipsum.txt', 'utf8');
const writableStream = fs.createWriteStream('lorem-ipsum-copy.txt');
readableStream.on('data', (chunk) => {
writableStream.write(chunk);
});
readableStream.on('end', () => {
writableStream.end();
});
The disadvantage to this method is that you need to manage the events on both the readable and writable streams.
The preferred method for copying files using streams is to use pipe
. A plumbing pipe passes water from a source such as a water tank (output) to a faucet or tap (input). Similarly, you use pipe
to direct data from an output stream to an input stream. (If you are familiar with the Linux-based bash shell, the pipe |
command directs data from one stream to another.)
Piping in Node.js provides the ability to read data from a source and write it somewhere else without managing the data flow as you would using the first method. Unlike the previous approach, you do not need to manage the events on both the readable and writable streams. For this reason, it is a preferred approach for implementing a copy command in your command-line application that uses streams.
In the mycliprogram
file, you will add a new function invoked when a user runs the program with the copy
command-line argument. The copy method will use pipe()
to copy from an input file to the destination copy of the file. Create the copy
function after the write
function as shown below:
...
function copy(filePath) {
const inputStream = fs.createReadStream(filePath)
const fileCopyPath = filePath.split('.')[0] + '-copy.' + filePath.split('.')[1]
const outputStream = fs.createWriteStream(fileCopyPath)
inputStream.pipe(outputStream)
outputStream.on('finish', () => {
console.log(`You have successfully created a ${filePath} copy. The new file name is ${fileCopyPath}.`);
})
}
In the copy function, you created an input or readable stream using fs.createReadStream()
. You also generated a new name for the destination copy of the file and created an output or writable stream using fs.createWriteStream()
. Then you piped the data from the inputStream
to the outputStream
using .pipe()
. Finally, you listened for the finish
event and printed out a message on a successful file copy.
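The destination-name logic can be checked on its own; note that this simple split handles only file names containing a single dot:

```javascript
// Same name generation as in the copy function, in isolation.
const filePath = 'lorem-ipsum.txt';
const fileCopyPath = filePath.split('.')[0] + '-copy.' + filePath.split('.')[1];

console.log(fileCopyPath); // lorem-ipsum-copy.txt
```

A name such as archive.tar.gz
would lose its final extension with this approach, so a more robust tool might build the copy name with the path
module instead.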
Recall that to close a writable stream, you have to call the end()
function on the stream. When piping streams, the end()
function is called on the writable stream (outputStream
) when the readable stream (inputStream
) emits the end
event. The end()
function of the writable stream emits the finish
event, and you listen for this event to indicate that you have finished copying a file.
To see this function in action, open the mycliprogram
file and update the case 2
block of the switch
statement as shown below:
...
switch(commands.indexOf(command)) {
...
case 2:
copy(args[3]);
break;
...
}
Calling the copy
function in the case 2
block of the switch
statement ensures that when you run the mycliprogram
program with the copy
command and the required file paths, the copy
function is executed.
Run mycliprogram
:
- ./mycliprogram copy lorem-ipsum.txt
The output will look similar to this:
OutputYou have successfully created a lorem-ipsum.txt copy. The new file name is lorem-ipsum-copy.txt.
Within the node-file-streams
folder, you will see a newly added file with the name lorem-ipsum-copy.txt
.
You have successfully added a copy function to your command-line program using pipe
. In the next step, you will use streams to modify the content of a file.
Transform()
In the previous three steps of this tutorial, you have worked with streams using the fs
module. In this section, you will modify file streams using the Transform()
class from the native stream
module, which provides a transform stream. You can use a transform stream to read data, manipulate the data, and provide new data as output. Thus, the output is a ‘transformation’ of the input data. Node.js modules that use transform streams include the crypto
module for cryptography and the zlib
module with gzip
for compressing and uncompressing files.
You are going to implement a custom transform stream using the Transform()
abstract class. The transform stream you create will reverse the contents of a file, which will demonstrate how to use transform streams to modify the content of a file as you want.
In the mycliprogram
file, you will add a reverse
function that the program will call when a user passes the reverse
command-line argument.
First, you need to import the Transform()
class at the top of the file below the other imports. Add the highlighted line as shown below:
#!/usr/bin/env node
...
const stream = require('stream');
const Transform = stream.Transform || require('readable-stream').Transform;
In Node.js versions earlier than v0.10, the Transform abstract class is missing. Therefore, the code block above includes the readable-stream polyfill so that this program can work with earlier versions of Node.js. If the Node.js version is greater than 0.10, the program uses the native abstract class; if not, it uses the polyfill.
Note: If you are using a Node.js version < 0.10
, you will have to run npm init -y
to create a package.json
file and install the polyfill using npm install readable-stream
to your working directory for the polyfill to be applied.
Next, you will create the reverse
function right under your copy
function. In that function, you will create a readable stream using the filePath
parameter, generate a name for the reversed file, and create a writable stream using that name. Then you create reverseStream
, an instance of the Transform()
class. When you call the Transform()
class, you pass in an object containing one function. This important function is the transform
function.
Beneath the copy
function, add the code block below to add the reverse
function.
...
function reverse(filePath) {
const readStream = fs.createReadStream(filePath);
const reversedDataFilePath = filePath.split('.')[0] + '-reversed.'+ filePath.split('.')[1];
const writeStream = fs.createWriteStream(reversedDataFilePath);
const reverseStream = new Transform({
transform (data, encoding, callback) {
const reversedData = data.toString().split("").reverse().join("");
this.push(reversedData);
callback();
}
});
readStream.pipe(reverseStream).pipe(writeStream).on('finish', () => {
console.log(`Finished reversing the contents of ${filePath} and saving the output to ${reversedDataFilePath}.`);
});
}
The transform
function receives three parameters: data
, encoding
type, and a callback
function. Within this function, you converted the data to a string, split the string, reversed the contents of the resultant array, and joined them back together. This process rewrites the data backward instead of forward.
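The same reversal logic can be run on a sample string outside of a stream (the sample text here is arbitrary):

```javascript
// The reversal used inside the transform function, in isolation:
const data = 'abc\ndef';
const reversedData = data.toString().split('').reverse().join('');
console.log(reversedData); // prints "fed" then "cba" on the next line
```

Because the entire chunk is reversed at once, both the character order and the line order flip, which is why the output file will show names written backward and listed in reverse order.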
Next, you connected the readStream
to the reverseStream
and finally to the writeStream
using two pipe()
functions. Finally, you listened for the finish
event to alert the user when the file contents have been completely reversed.
You will notice that the code above uses another syntax for listening for the finish
event. Instead of listening for the finish
event for the writeStream
on a new line, you chained the on
function to the second pipe
function. Because pipe() returns the destination stream, you can chain event listeners directly onto it. In this case, doing so has the same effect as calling the on('finish') function on the writeStream.
To wrap things up, replace the console.log
statement in the case 3
block of the switch
statement with reverse()
.
...
switch (command){
...
case 3:
reverse(args[3]);
break;
...
}
To test this function, you will use another file containing the names of countries in alphabetical order (countries.csv
). You can download it to your working directory by running the command below.
- wget https://raw.githubusercontent.com/do-community/node-file-streams/999e66a11cd04bc59843a9c129da759c1c515faf/countries.csv
You can then run mycliprogram
.
- ./mycliprogram reverse countries.csv
The output will look similar to this:
Output
Finished reversing the contents of countries.csv and saving the output to countries-reversed.csv.
Compare the contents of countries-reversed.csv
with countries.csv
to see the transformation. Each name is now written backward, and the order of the names has also been reversed (“Afghanistan” is written as “natsinahgfA” and appears last, and “Zimbabwe” is written as “ewbabmiZ” and appears first).
You have successfully created a custom transform stream. You have also created a command-line program with functions that use streams for file handling.
Streams are used in native Node.js modules and in various yarn
and npm
packages that perform input/output operations because they provide an efficient way to handle data. In this article, you used various stream-based functions to work with files in Node.js. You built a command-line program with read
, write
, copy
, and reverse
commands. Then you implemented each of these commands in functions named accordingly. To implement the functions, you used functions like createReadStream
, createWriteStream
, pipe
from the fs
module, the createInterface
function from the readline
module, and finally the abstract Transform()
class. Finally, you pieced these functions together in a small command-line program.
As a next step, you could extend the command-line program you created to include other file system functionality you might want to use locally. A good example could be writing a personal tool to convert data from .tsv
stream source to .csv
or attempting to replicate the wget
command you used in this article to download files from GitHub.
The command-line program you have written handles command-line arguments itself and uses a simple prompt to get user input. You can learn more about building more robust and maintainable command-line applications by following How To Handle Command-line Arguments in Node.js Scripts and How To Create Interactive Command-line Prompts with Inquirer.js.
Additionally, Node.js provides extensive documentation on the various Node.js stream module classes, methods, and events you might need for your use case.
End-to-end testing (e2e for short) is a process in which the entire lifecycle of an application is tested from a user’s perspective in a production-like scenario. This process typically involves deploying a script to automatically navigate through the application’s interface as a normal user would, testing for specific features and behaviors along the way. In Node.js development, you can use a combination of the Chrome API Puppeteer and the JavaScript testing framework Jest to automate e2e testing, allowing you to ensure that the user interface (UI) of your application is still functioning as you fix bugs and add new features.
In this tutorial, you will write an e2e test that validates that the account creation and login features of a sample webpage work as intended. First, you will write a basic Puppeteer script to open up a browser and navigate to a test webpage, then you will configure presets that make the browser and page instance globally available. Next, you will clone the mock-auth
sample application from the DigitalOcean Community repository on GitHub and serve the application locally. This sample application will provide the user with an interface to create an account and log in to that account. Finally, you will adjust your Puppeteer scripts to fill the account creation and login forms and click the submit buttons, then you will write unit tests in Jest to validate that the scripts work as expected.
Warning: The ethics and legality of web scraping are complex and constantly evolving. They also differ based on your location, the data’s location, and the website in question. This tutorial scrapes a locally served sample application that was specifically designed to test scraper applications. Scraping any other domain falls outside the scope of this tutorial.
Before you begin this guide you’ll need the following:
- Node.js version 14.16.0 or greater installed on your computer. To install this on macOS or Ubuntu 20.04, follow the steps in How To Install Node.js and Create a Local Development Environment on macOS or the Option 2 — Installing Node.js with Apt Using a NodeSource PPA section of How To Install Node.js on Ubuntu 20.04.
In this step, you will create a directory for the Node.js testing program and install the required dependencies. This tutorial uses three dependencies, which you will install using npm, Node.js’s default package manager. These dependencies will enable you to use Jest and Puppeteer together.
First, create a folder for this project and navigate into that folder. This tutorial will use end-to-end-test-tutorial
as the name of the project:
- mkdir end-to-end-test-tutorial
- cd end-to-end-test-tutorial
You will run all subsequent commands in this directory.
You can now initialize npm in your directory so that it can keep track of your dependencies. Initialize npm for your project with the following command:
- npm init
This command will present a sequence of prompts. You can press ENTER at every prompt, or you can add personalized descriptions. Make sure to press ENTER and leave the default values in place when prompted for entry point: and test command:. Alternatively, you can pass the -y flag to npm and it will submit all the default values for you.
Filling out these prompts will create a package.json
file for you, which will manage your project’s dependencies and some additional metadata.
You will then be prompted to save the file. Type yes
and press ENTER
. npm will save this output as your package.json
file.
Now you can install your dependencies. The three dependencies you need for this tutorial are:
- jest: A unit testing library.
- puppeteer: A high-level abstraction API over the Chrome DevTools Protocol.
- jest-puppeteer: A package that helps you set up Jest properly with Puppeteer.
Install these dependencies using npm:
- npm install --save jest puppeteer jest-puppeteer
When you run this command, it will install Jest, Puppeteer, a compatible version of the Chromium browser, and the jest-puppeteer
library.
Note: On Linux machines, Puppeteer might require some additional dependencies. If you are using Ubuntu 20.04, check the Debian Dependencies dropdown inside the Chrome headless doesn’t launch on UNIX section of Puppeteer’s troubleshooting docs. You can also use the following command to help find any missing dependencies:
- ldd ~/end-to-end-test-tutorial/node_modules/puppeteer/.local-chromium/linux-970485/chrome-linux/chrome | grep not
In this command you are using ldd
on your project’s installation of Chrome to find the program’s dependencies, then piping the results to grep
to find all the dependencies that contain the word not
. This will show you the dependencies that are not installed. Note that your individual path to the chrome
module may vary.
With the required dependencies installed, your package.json
file will have them included as a part of its dependencies
. You can verify this by opening it in your preferred text editor:
- nano package.json
This will show you a file similar to the following:
{
"name": "end-to-end-test-tutorial",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"jest": "^27.5.1",
"jest-puppeteer": "^6.1.0",
"puppeteer": "^13.5.0"
}
}
This confirms that the dependencies have been installed.
With your testing program initiated and the dependencies set up, you will next configure it and add in an initial Puppeteer script to ensure that everything has been set up properly.
The manual way of testing a website is to use a browser to surf through a web page, clicking buttons, scrolling through pages, and confirming that the correct pages and texts are being rendered on each interaction. These are the same procedures involved in writing automated end-to-end tests: a browser will programmatically open a web page and navigate the interface, and a testing library will assert that the browser got the expected behavior or output from the web page. In this step, you will configure Jest and Puppeteer to carry out these procedures in your Node.js application, then test the configuration with a Puppeteer script that visits www.google.com
.
First, create a few folders to give structure to your testing application:
- mkdir actions
- mkdir logs
- mkdir specs
- mkdir utils
The actions
folder will hold the Puppeteer scripts that will crawl your local web page, specs
will hold the tests themselves, and utils
will hold helper files like mock credential generation. Although not used in this tutorial, it is a best practice to create a logs
folder to hold the results of your tests.
Once you’ve made these directories, create and open a jest.config.js
file in your preferred editor:
- nano jest.config.js
Add these configurations to the file:
module.exports = {
preset: 'jest-puppeteer',
roots: [ 'specs' ],
};
This is a Jest configuration file, set up to tell Jest to use the preset
configuration of the jest-puppeteer
library you installed. It also designates the specs
folder as the location of the test scripts you will write later in this tutorial.
Save and exit from the file. Next, from the root directory, create and open the jest-puppeteer.config.js
file:
- nano jest-puppeteer.config.js
Add these configurations to the file:
module.exports = {
launch: {
headless: false,
args: [ "--window-size=1366,768" ],
},
browser: 'chromium',
}
These configurations define how the browser will open to test the web page. headless
determines if the browser runs with or without an interface; in this case, you are configuring Puppeteer to open the window in your desktop environment. args
is used to pass relevant Puppeteer arguments to the browser instance. In this case, you’re using it to specify the window size of the browser opened. Finally, browser
specifies the browser to use. For this tutorial, this is Chromium, but it could be Firefox if you have puppeteer-firefox
installed.
Save and exit the file.
With the specs
folder set as the folder that holds all of your tests, you will now create a basic test file in that folder.
Navigate to the specs
folder and create a users.test.js
file. Open the users.test.js
file and add the following code to test the functionality of your application:
jest.setTimeout(60000)
describe('Basic authentication e2e tests', () => {
beforeAll( async () => {
// Set a definite size for the page viewport so view is consistent across browsers
await page.setViewport( {
width: 1366,
height: 768,
deviceScaleFactor: 1
} );
await page.goto('https://www.google.com');
await page.waitFor(5000);
} );
it( 'Should be truthy', async () => {
expect( true ).toBeTruthy();
})
});
In this code, you first set Jest’s default timeout to 60 seconds with the setTimeout()
method. Jest has a default timeout of five seconds at which a test must pass or fail, or the test will return an error. Since browser interactions often take longer than five seconds to run, you set it to 60 seconds to accommodate the time lapse.
Next, the describe
block groups related tests with each other using the describe
keyword. Within that, the beforeAll
script allows you to run specific code before every test in this block. This holds code like variables that are local to this test block but global to all tests it contains. Here, you are using it to set the viewport of the page opened in the browser to the size of the browser specified in jest-puppeteer.config.js
. Then you use the page
object to navigate to www.google.com
and wait for five seconds, so that you can see the page after it loads and before the browser closes. The page
object is globally available in all test suites.
Next, you created a mock test to validate that the script you have written works. It checks if the boolean true
is a truthy value, which will always be the case if all is working correctly.
Save and exit from the file.
Now, you need a way to run the test to see if it works as expected. Open your package.json
file and modify the scripts
section as shown in the following code block:
{
"name": "Doe's-e2e-test",
"version": "1.0.0",
"description": "An end to end test",
"main": "index.js",
"scripts": {
"e2e": "jest"
},
"keywords": [],
"author": "John Doe",
"license": "MIT"
}
This will make the keyword e2e
call the jest
command to run the test. After the file has been saved, run your e2e test in your terminal with the following command:
- npm run e2e
Once you run this command, a new browser window will open, navigate to Google, then print the following output on the console:
Output
> jest
PASS specs/users.test.js (13.772s)
Basic authentication e2e tests
√ Should be truthy (3ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 14.221s, estimated 16s
Ran all test suites.
This shows that both Puppeteer and Jest are working properly. You’re now ready to start writing tests for web pages. But first, you will set up the sample web page so that you have an interface to test.
In this step, you will clone a sample application from the DigitalOcean Community GitHub repository, then run it locally with the Live Server development server. This will give you a user interface to test with your Node.js application.
Note: This tutorial will only inspect this sample UI from a user’s perspective; the development required to build out the user interface is beyond the scope of this tutorial. If you are interested in developing UI login authentication pages, check out our How To Add Login Authentication to React Applications tutorial.
First, open a new terminal and run the following git
command outside of your testing application:
- git clone https://github.com/do-community/mock-auth.git
This will clone the user interface from the DigitalOcean repository. This code contains HTML, CSS, and JavaScript that create a mock authentication interface.
Next, install Live Server globally with the following command:
- npm install -g live-server
Note: On some systems such as Ubuntu 20.04, installing an npm package globally can result in a permission error, which will interrupt the installation. Since it is a security best practice to avoid using sudo
with npm install
, you can instead resolve this by changing npm’s default directory. If you encounter an EACCES
error, follow the instructions at the official npm documentation.
Live Server is a light development server with live reload capability. It will take your static HTML pages and make them available on localhost
.
Next, navigate into the mock-auth
sample interface:
- cd mock-auth
Then start the application on a local development server with the following command:
- live-server
A web page will open in your default browser at http://127.0.0.1:8080
that will render as the following image:
This is the UI that your testing program will interact with. If you click on the Login button, the browser will load a login form with fields for Username and Password. Selecting the Create Account button will lead you to a Create Account form, with fields for Fullname, Username, and Password. In the next steps, you will write tests to navigate through each of these interfaces, ensuring that account creation and logging in work as expected.
When you create an account on websites, the most common behavior is that the website navigates you to a welcome page that has your name and some relevant information about your newly created account. In this step, you will validate that the account creation page on the sample web application works in this way. To do this, you will write a script in the actions
folder to navigate the interface, then write a test that uses that action to validate the functionality.
First, return to the terminal that has your testing program in it and create a script to crawl the Create Account page of the sample web application. This tutorial will name this file createAccount.js
:
- nano actions/createAccount.js
Once this file is open, add in the following code:
const chalk = require( 'chalk' );
class createAccount {
constructor( page ) {
this.url = "http://127.0.0.1:8080/"
this.page = page;
this.signupBtn = '#signup';
this.signupBody = '#signupBody';
this.fullnameField = '#fullname';
this.usernameField = '#username';
this.passwordField = '#password';
this.loginPageBtn = '#loginBtn';
this.signupPageBtn = '#signupBtn';
}
}
module.exports = ( page ) => new createAccount( page );
This snippet first imports the chalk module, which will be used later to format error messages in the terminal. Note that chalk was not part of the earlier dependency installation, so install it with npm install chalk@4 (versions 5 and later are ESM-only and cannot be loaded with require()). This is not necessary, but will make your error reporting more legible. Next, you create a class called createAccount
. The constructor takes a page
parameter, then sets the homepage URL for the sample web application and sets the properties of the object as the ID references to the HTML elements your program will interact with on the DOM. Finally, your code exports a function that creates a new instance of the createAccount
class.
Next, you will add a signup
method to the createAccount
class that will help you perform various actions on the page. Add the following highlighted code to your file:
...
this.signupPageBtn = '#signupBtn';
}
async signup( fullname, username, password ) {
try {
await this.page.goto( this.url );
await this.page.waitFor( this.signupBtn );
await this.page.click( this.signupBtn );
// Wait for the signupBody on the signup page to load
await this.page.waitFor( this.signupBody );
// Type the login credentials into the input fields
await this.page.type( this.fullnameField, fullname );
await this.page.waitFor( 1000 );
await this.page.type( this.usernameField, username );
await this.page.waitFor( 1000 );
await this.page.type( this.passwordField, password );
await this.page.waitFor( 1000 );
// Click the create account button
await this.page.click( this.signupPageBtn );
// Wait for homepage to load
await this.page.waitFor( '#firstname' );
await this.page.waitFor( 2000 );
const firstname = await this.page.$eval( '#homeBody #firstname', el => el.textContent );
return firstname;
} catch ( err ) {
console.log( chalk.red( 'ERROR => ', err ) );
}
}
}
module.exports = ( page ) => new createAccount( page );
Here, you are declaring the signup
method as asynchronous with the async
keyword. This function uses a try...catch
block to go to the URL of the web application and navigate through the interface. Some of the methods used on the page
object are:
page.goto(url)
: Navigates the browser to a specified URL.
page.waitFor(milliseconds or element)
: Delays other actions on the page for the specified milliseconds, or until an element has loaded.
page.click(selector)
: Clicks on an element on the page.
page.type(selector, text)
: Types text in a specified input field.
page.$eval(selector, callback(element))
: Selects an element and runs the callback function on it.
With these methods, the signup
function first navigates the page to the base URL, then waits for the signup button to load. It then clicks this button and waits for the body of the signup form to load. It types in the fullname, username, and password in their respective fields at intervals of one second. Then, it clicks the signup button and waits for the welcome page to load. The page.$eval()
method is used to fetch the name displayed on the welcome page, which is returned from this method.
Save and exit from the file.
Now, you will write tests to validate that the account creation works as expected. But before you proceed, you have to decide what credentials to create a new account with. For this, you will create a new module.
Create a credentials.js
file in the utils
folder:
- nano utils/credentials.js
Add the following code to the file:
module.exports = ( user ) => {
let username = `${user}-${Math.random()}`
let password = `${Math.random()}`;
// Make sure both usernames and passwords are strings
username = String( username );
password = String( password );
const fullname = "John Doe"
let credential = { fullname, username, password };
return credential;
}
This code generates a random username and password, along with a hard-coded fullname, then returns the generated credentials as a JSON object. You can change the hard-coded name to any name of your choice; the username is randomized because it is typically the unique identifier when creating an account.
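The following sketch inlines the same generator so the snippet stands alone and shows the shape of its output (the random values will differ on every run):

```javascript
// Inline copy of the credentials generator, for illustration only.
const credentials = ( user ) => {
  const username = String(`${user}-${Math.random()}`);
  const password = String(Math.random());
  const fullname = 'John Doe';
  return { fullname, username, password };
};

const credential = credentials('User');
console.log(credential.username);        // e.g. "User-0.8413..."
console.log(typeof credential.password); // prints "string"
```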
Save and exit from credentials.js
.
Next, navigate to the specs
folder and open the users.test.js
file in your editor. Modify the code as shown here:
let credentials = require( '../utils/credentials' );
jest.setTimeout(60000);
describe('Basic authentication e2e tests', () => {
let credential;
beforeAll( async () => {
// Set a definite size for the page viewport so view is consistent across browsers
await page.setViewport( {
width: 1366,
height: 768,
deviceScaleFactor: 1
} );
credential = credentials( 'User' );
await page.goto('https://www.google.com');
await page.waitFor(5000);
} );
it( 'Should be truthy', async () => {
expect( true ).toBeTruthy();
})
} );
Here you imported the credentials
module that you created earlier, then created a credential
variable globally available to all tests in that block and assigned the generated credentials to that variable using the beforeAll
block, which runs before every test in this block.
Now, you can write the test that actually does the validation by modifying the code as shown here:
let credentials = require( '../utils/credentials' );
let createAccount = require( '../actions/createAccount' );
jest.setTimeout(60000);
describe('Basic authentication e2e tests', () => {
let credential;
beforeAll( async () => {
// Set a definite size for the page viewport so view is consistent across browsers
await page.setViewport( {
width: 1366,
height: 768,
deviceScaleFactor: 1
} );
credential = credentials( 'User' );
createAccount = await createAccount( page );
} );
it( 'Should be able to create an account', async () => {
const firstname = await createAccount.signup( credential.fullname, credential.username, credential.password );
page.waitFor( 1000 );
expect( credential.fullname ).toContain( firstname );
})
} );
You have now imported the createAccount
module and called the signup
method to get the firstname displayed on the welcome page once the program has navigated the interface. The code then asserts that the fullname generated before the test method was called contains this firstname.
Save the script, then run it using the command npm run e2e
:
- npm run e2e
The Chrome browser will open and automatically create an account with the generated credentials. Once the test is finished, the following output will be logged to your console:
Output
> jest
PASS specs/users.test.js (28.881s)
Basic authentication e2e tests
√ Should be able to create an account (26273ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 29.087s
Ran all test suites.
This script has now validated the account creation process.
In this step, you wrote a script that crawls the sample web application and creates an account automatically. You then asserted that the process works as expected by writing a unit test for the crawler scripts. In the next step, you will do the same for the login feature.
In this step, you will assert that the login feature works as it should. This step is similar to the create account step in that you will first create a web crawler script to navigate the login page, then write a unit test to confirm that the functionality is working as expected.
First, create and open a loginAccount.js
file in your preferred editor:
- nano actions/loginAccount.js
Then add the following code to traverse the login page:
const chalk = require( 'chalk' );
class LoginAccount {
constructor( page ) {
this.url = "http://127.0.0.1:8080/"
this.page = page;
this.loginBtn = '#login';
this.loginBody = '#loginBody';
this.usernameField = '#username';
this.passwordField = '#password';
this.loginPageBtn = '#loginBtn';
}
async login( username, password ) {
try {
await this.page.goto( this.url );
await this.page.waitFor( this.loginBtn );
await this.page.click( this.loginBtn );
// Wait for the loginBody on the login page to load
await this.page.waitFor( this.loginBody );
// Type the login credentials into the input fields
await this.page.type( this.usernameField, username );
await this.page.waitFor( 1000 );
await this.page.type( this.passwordField, password );
await this.page.waitFor( 1000 );
await this.page.click( this.loginPageBtn );
// Wait for homepage to load
await this.page.waitFor( '#firstname' );
await this.page.waitFor( 2000 );
const firstname = await this.page.$eval( '#homeBody #firstname', el => el.textContent );
return firstname;
} catch ( err ) {
console.log( chalk.red( 'ERROR => ', err ) );
}
}
}
module.exports = ( page ) => new LoginAccount( page );
This code is similar to the createAccount.js
file. First, you created a LoginAccount
class and exported a function that takes in the page
object as a parameter. The constructor contains ID references to several HTML elements to interact with on the DOM.
The LoginAccount
class has an asynchronous login
method that takes in the username
and password
as parameters and helps you perform various actions on the page. The code first navigates to the URL of your sample web application, then clicks the button that loads the login page. When the login page has been loaded, it fills the form with the username and password passed in to the login
method, then submits it by clicking the Login button. If the login was successful, it loads the welcome page and returns the first name, which you will pass in to your unit test.
Save and exit from the file.
Next, open up your users.test.js
file again and modify it as follows:
let credentials = require( '../utils/credentials' );
let createAccount = require( '../actions/createAccount' );
let loginAccount = require( '../actions/loginAccount' );
jest.setTimeout(60000);
describe('Basic authentication e2e tests', () => {
let credential;
beforeAll( async () => {
// Set a definite size for the page viewport so view is consistent across browsers
await page.setViewport( {
width: 1366,
height: 768,
deviceScaleFactor: 1
} );
credential = credentials( 'User' );
createAccount = await createAccount( page );
loginAccount = await loginAccount( page );
} );
it( 'Should be able to create an account', async () => {
const firstname = await createAccount.signup( credential.fullname,
credential.username, credential.password );
page.waitFor( 1000 );
expect( credential.fullname ).toContain( firstname );
})
it( 'Should be able to log in after a successful account creation', async () => {
const firstname = await loginAccount.login( credential.username, credential.password );
page.waitFor( 1000 );
expect( credential.fullname ).toContain( firstname );
} );
} );
In this code, you imported the loginAccount
module, called the web crawler function on the page
, then created a new test assertion that passes if the name on the Login page is contained in the generated credentials.
Save the file, then run npm run e2e
from the terminal:
- npm run e2e
The web crawler will open a browser, navigate to the Login page, and enter the credentials, then the test script will run to find out if the web crawler made it to the welcome page.
The following will be logged to the terminal:
Output
> jest
PASS specs/users.test.js (48.96s)
Basic authentication e2e tests
√ Should be able to create an account (21534ms)
√ Should be able to log in after a successful account creation (12899ms)
Test Suites: 1 passed, 1 total
Tests: 2 passed, 2 total
Snapshots: 0 total
Time: 52.426s
Ran all test suites.
This shows that the test for a successful login passes as expected. However, the test is not yet complete; the program still needs to be able to handle unsuccessful login attempts.
If a wrong username and password combination is provided, an alert prompt pops up with the message Invalid username or password inputted. To test the alert box message, you can listen to a dialog
event on the page
object. The presence of the alert box indicates that an unsuccessful login attempt was just made.
To implement this, modify the users.test.js
script as shown here:
let credentials = require( '../utils/credentials' );
let createAccount = require( '../actions/createAccount' );
let loginAccount = require( '../actions/loginAccount' );
jest.setTimeout(60000);
describe('Basic authentication e2e tests', () => {
let credential;
beforeAll( async () => {
// Set a definite site for the page viewport so view is consistent across browsers
await page.setViewport( {
width: 1366,
height: 768,
deviceScaleFactor: 1
} );
credential = credentials( 'User' );
createAccount = await createAccount( page );
loginAccount = await loginAccount( page );
} );
it( 'Should be able to create an account', async () => {
const firstname = await createAccount.signup( credential.fullname, credential.username, credential.password );
await page.waitFor( 1000 );
expect( credential.fullname ).toContain( firstname );
})
it( 'Should be able to log in after a successful account creation', async () => {
const firstname = await loginAccount.login( credential.username,
credential.password );
await page.waitFor( 1000 );
expect( credential.fullname ).toContain( firstname );
} );
it( 'Should not login on wrong credentials', async () => {
try {
page.on( 'dialog', dialog => {
expect( dialog.message() ).toBe( 'Invalid username or password inputted' );
dialog.accept();
});
await page.goto( 'http://127.0.0.1:8080/login.html' );
await page.type( '#username', 'username' );
await page.type( '#password', 'password' );
await page.click( '#loginBtn' );
await page.waitFor(5000); // Wait for the dialog to accept the prompt before proceeding
} catch(err){
console.log("An error occurred while trying to login => ", err);
}
})
} );
In this code, you’ve added a new assertion that first sets up an event listener for the dialog
event before performing any page interactions. If the web crawler clicks the button before listening to the dialog
event, the dialog
would pop up before the listener was registered, and the assertion would never run.
Next, the code navigates to the login.html
page and enters username
and password
as the credentials. Since these credentials do not match with those entered when you created an account, this will cause an error, which will trigger the dialog box that your assertion is waiting for. Finally, note that you added a five-second delay at the end. This is to ensure that the dialog
event accepts the dialog before jest-puppeteer
closes the page. The page is closed once there are no longer tests available to run.
Save your users.test.js
file and run the test:
- npm run e2e
Next, observe that all tests pass:
Output
PASS specs/users.test.js (25.736 s)
Basic authentication e2e tests
✓ Should be able to create an account (11987 ms)
✓ Should be able to log in after a successful account creation (8196 ms)
✓ Should not login on wrong credentials (5168 ms)
Test Suites: 1 passed, 1 total
Tests: 3 passed, 3 total
Snapshots: 0 total
Time: 25.826 s, estimated 27 s
Ran all test suites.
This shows that the sample web application is working as expected.
In this tutorial, you used Puppeteer and Jest to write automated tests for a sample web application with account creation and login functionality. You configured Puppeteer and Jest to work together, then wrote scripts to navigate the web application UI and return the values of HTML elements it encountered. Finally, you tested whether those values matched the expected values of the actions you were testing.
End-to-end testing is not only a useful way to test your UI; you can also use it to ascertain that other key functionalities in your web application work as expected. For example, you can use device emulation and network throttling to run performance tests across several devices. For more information on end-to-end testing, check out the official docs for Puppeteer and Jest.
Discord is a chat application that allows millions of users across the globe to message and voice chat online in communities called guilds or servers. Discord also provides an extensive API that developers can use to build powerful Discord bots. Bots can perform various actions such as sending messages to servers, DM-ing users, moderating servers, and playing audio in voice chats. This allows developers to craft powerful bots that include advanced, complex features like moderation tools or even games. For example, the utility bot Dyno serves millions of guilds and contains useful features such as spam protection, a music player, and other utility functions. Learning how to create Discord bots allows you to implement many possibilities, which thousands of people could interact with every day.
In this tutorial, you will build a Discord bot from scratch, using Node.js and the Discord.js library, which allows users to directly interact with the Discord API. You’ll set up a profile for a Discord bot, get authentication tokens for the bot, and program the bot with the ability to process commands with arguments from users.
Before you get started, you will need the following:
Node.js installed on your development machine. To install this on macOS or Ubuntu 20.04, follow the steps in How to Install Node.js and Create a Local Development Environment on macOS or the Installing Node.js with Apt Using a NodeSource PPA section of How To Install Node.js on Ubuntu 20.04.
Any text editor of your choice, such as Visual Studio Code, Atom, Sublime, or nano.
A free Discord account with a verified email account and a free Discord server you will use to test your Discord bot.
In this step, you’ll use the Discord developers graphical user interface (GUI) to set up a Discord bot and get the bot’s token, which you will pass into your program.
In order to register a bot on the Discord platform, use the Discord application dashboard. Here developers can create Discord applications including Discord bots.
To get started, click New Application. Discord will ask you to enter a name for your new application. Then click Create to create the application.
Note: The name for your application is independent from the name of the bot, and the bot doesn’t have to have the same name as the application.
Now open up your application dashboard. To add a bot to the application, navigate to the Bot tab on the navigation bar to the left.
Click the Add Bot button to add a bot to the application. Click the Yes, do it! button when it prompts you for confirmation. You will then be on a dashboard containing details of your bot’s name, authentication token, and profile picture.
You can modify your bot’s name or profile picture here on the dashboard. You also need to copy the bot’s authentication token by clicking Click to Reveal Token and copying the token that appears.
Warning: Never share or upload your bot token as it allows anyone to log in to your bot.
Now you need to create an invite to add the bot to a Discord guild where you can test it. First, navigate to the URL Generator page under the OAuth2 tab of the application dashboard. To create an invite, scroll down and select bot under scopes. You must also set permissions to control what actions your bot can perform in guilds. For the purposes of this tutorial, select Administrator, which will give your bot permission to perform nearly all actions in guilds. Copy the link with the Copy button.
Next, add the bot to a server. Follow the invite link you just created. You can add the bot to any server you own, or have administrator permissions in, from the drop-down menu.
Now click Continue. Ensure the tickbox next to Administrator is ticked—this will grant the bot administrator permissions. Then click Authorize. Discord will ask you to solve a CAPTCHA before the bot joins the server. The Discord bot will now appear on the members list of the server you added it to, shown as offline.
You’ve successfully created a Discord bot and added it to a server. Next, you will write a program to log in to the bot.
In this step, you’ll set up the basic coding environment where you will build your bot and log in to the bot programmatically.
First, you need to set up a project folder and necessary project files for the bot.
Create your project folder:
- mkdir discord-bot
Move into the project folder you just created:
- cd discord-bot
Next, use your text editor to create a file named config.json
to store your bot’s authentication token:
- nano config.json
Then add the following code to the config file, replacing the highlighted text with your bot’s authentication token:
{
"BOT_TOKEN": "YOUR BOT TOKEN"
}
Save and exit the file.
Next you’ll create a package.json
file, which will store details of your project and information about the dependencies you’ll use for the project. You’ll create a package.json
file by running the following npm
command:
- npm init
npm
will ask you for various details about your project. If you would like guidance on completing these prompts, you can read about them in How To Use Node.js Modules with npm and package.json.
You’ll now install the discord.js
package that you will use to interact with the Discord API. You can install discord.js
through npm with the following command:
- npm install discord.js
Now that you’ve set up the configuration file and installed the necessary dependency, you’re ready to begin building your bot. In a real-world application, a large bot would be split across many files, but for the purposes of this tutorial, the code for your bot will be in one file.
First, create a file named index.js
in the discord-bot
folder for the code:
- nano index.js
Begin coding the bot by requiring the discord.js
dependency and the config file with the bot’s token:
const Discord = require("discord.js");
const config = require("./config.json");
Following this, add the next two lines of code:
...
const client = new Discord.Client({intents: ["GUILDS", "GUILD_MESSAGES"]});
client.login(config.BOT_TOKEN);
Save and exit your file.
The first line of code creates a new Discord.Client
and assigns it to the constant client
. This client is partly how you will interact with the Discord API and how Discord will notify you of events such as new messages. The client, in effect, represents the Discord bot. The object passed into the Client
constructor specifies the gateway intents of your bot. This defines which WebSocket events your bot will listen to. Here you have specified GUILDS
and GUILD_MESSAGES
to enable the bot to receive message events in guilds.
The second line of code uses the login
method on the client
to log in to the Discord bot you created, using the token in the config.json
file as a password. The token lets the Discord API know which bot the program is for and that you’re authenticated to use the bot.
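Because the token must never be committed to source control (see the warning above), one common alternative to config.json is reading it from an environment variable. The helper below is a hypothetical sketch, not part of Discord.js; BOT_TOKEN is an assumed variable name:

```javascript
// Hypothetical helper: read the bot token from the environment
// instead of a config file, failing fast if it is missing.
function getBotToken(env = process.env) {
  const token = env.BOT_TOKEN;
  if (!token) {
    throw new Error("BOT_TOKEN is not set");
  }
  return token;
}

// Usage sketch: client.login(getBotToken());
```

With this approach the token lives only in the shell environment (for example, set via `BOT_TOKEN=... node index.js`), so there is no secret file to accidentally upload.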
Now, execute the index.js
file using Node:
- node index.js
Your bot’s status will change to online in the Discord server you added it to.
You’ve successfully set up a coding environment and created the basic code for logging in to a Discord bot. In the next step you’ll handle user commands and get your bot to perform actions, such as sending messages.
In this step, you will create a bot that can handle user commands. You will implement your first command ping
, which will respond with "pong"
and the time taken to respond to the command.
First, you need to detect and receive any messages users send so you can process any commands. Using the on
method on the Discord client, Discord will send you a notification about new events. The on
method takes two arguments: the name of an event to wait for and a function to run every time that event occurs. With this method you can wait for the messageCreate event
—this will occur every time a message is sent to a guild where the bot has permission to view messages. Therefore you will create a function that runs every time a message is sent to process commands.
First open your file:
- nano index.js
Add the following code to your file:
...
const client = new Discord.Client({intents: ["GUILDS", "GUILD_MESSAGES"]});
client.on("messageCreate", function(message) {
});
client.login(config.BOT_TOKEN);
This function, which runs on the messageCreate
event, takes message
as a parameter. message
will have the value of a Discord.js message instance, which contains information about the sent message and methods to help the bot respond.
Now add the following line of code to your command-handling function:
...
client.on("messageCreate", function(message) {
if (message.author.bot) return;
});
...
This line checks if the author of the message is a bot, and if so, stops processing the command. This is important as generally you don’t want to process, or respond to, bots’ messages. Bots usually don’t need to use information from other bots, so ignoring their messages saves processing power and helps prevent accidental replies.
Now you’ll write a command handler. To accomplish this, it’s good to understand the usual format of a Discord command. Typically, the structure of a Discord command contains three parts in the following order: a prefix, a command name, and (sometimes) command arguments.
Prefix: the prefix can be anything, but is typically a piece of punctuation or abstract phrase that wouldn’t normally be at the start of a message. This means that when a message starts with the prefix, the bot knows the message is intended as a command for it to process.
Command name: The name of the command the user wants to use. This means the bot can support multiple commands with different functionality and allow users to choose between them by supplying a different command name.
Arguments: Sometimes if the command requires or uses extra information from the user, the user can supply arguments after the command name, with each argument separated by a space.
Note: There is no enforced command structure and bots can process commands how they like, but the structure presented here is an efficient structure that the vast majority of bots use.
To begin creating a command parser that handles this format, add the following lines of code to the message handling function:
...
const prefix = "!";
client.on("messageCreate", function(message) {
if (message.author.bot) return;
if (!message.content.startsWith(prefix)) return;
});
...
You add the first line of code to assign the value "!"
to the constant prefix
, which you will use as the bot’s prefix.
The second line you add checks whether the content of the message begins with the prefix you set and, if it doesn’t, stops processing the message.
Now you must convert the rest of the message into a command name and any arguments that may exist in the message. Add the following highlighted lines:
...
client.on("messageCreate", function(message) {
if (message.author.bot) return;
if (!message.content.startsWith(prefix)) return;
const commandBody = message.content.slice(prefix.length);
const args = commandBody.split(' ');
const command = args.shift().toLowerCase();
});
...
You use the first line here to remove the prefix from the message content and assign the result to the constant commandBody
. This is necessary as you don’t want to include the prefix in the parsed command name.
The second line takes the message with the removed prefix and uses the split
method on it, with a space as the separator. This splits it into an array of sub-strings, making a split wherever there is a space. This results in an array containing the command name, then, if included in the message, any arguments. You assign this array to the constant args
.
The third line removes the first element from the args
array (which will be the command name provided), converts it to lowercase, and then assigns it to the constant command
. This allows you to isolate the command name and leave only arguments in the array. You also use the method toLowerCase
as commands are typically case insensitive in Discord bots.
You’ve completed building a command parser, implementing a required prefix, and getting the command name and any arguments from messages. You will now implement and create the code for the specific commands.
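The parsing steps above can be condensed into a small standalone function to see the intermediate results. parseCommand is an illustrative name, not part of Discord.js:

```javascript
// Standalone sketch of the parsing logic described above.
function parseCommand(content, prefix = "!") {
  if (!content.startsWith(prefix)) return null;     // not a command
  const commandBody = content.slice(prefix.length); // drop the prefix
  const args = commandBody.split(" ");              // split on spaces
  const command = args.shift().toLowerCase();       // first token is the name
  return { command, args };
}

console.log(parseCommand("!Sum 1 2 3")); // { command: 'sum', args: [ '1', '2', '3' ] }
console.log(parseCommand("hello"));      // null (no prefix)
```

Note that the command name is lowercased while the arguments are left untouched, matching the behavior of the handler in index.js.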
Add the following code to start implementing the ping
command:
...
const args = commandBody.split(' ');
const command = args.shift().toLowerCase();
if (command === "ping") {
}
});
...
This if
statement checks if the command name you parsed (assigned to the constant command
) matches "ping"
. If it does, that indicates the user wants to use the "ping"
command. You will nest the code for the specific command inside the if
statement block. You will repeat this pattern for other commands you want to implement.
Now, you can implement the code for the "ping"
command:
...
if (command === "ping") {
const timeTaken = Date.now() - message.createdTimestamp;
message.reply(`Pong! This message had a latency of ${timeTaken}ms.`);
}
...
Save and exit your file.
You add the "ping"
command block that calculates the difference between the current time—found using the now
method on the Date
object—and the timestamp when the message was created in milliseconds. This calculates how long the message took to process and the "ping"
of the bot.
The second line responds to user’s command using the reply
method on the message
constant. The reply
method pings (which notifies the user and highlights the message for the specified user) the user who invoked the command, followed by the content provided as the first argument to the method. You provide a template literal containing a message and the calculated ping as the response that the reply
method will use.
This concludes implementing the "ping"
command.
Run your bot using the following command (in the same folder as index.js
):
- node index.js
You can now use the command "!ping"
in any channel the bot can view and message in, resulting in a response.
You have successfully created a bot that can handle user commands and you have implemented your first command. In the next step, you will continue developing your bot by implementing a sum command.
Now you will extend your program by implementing the "!sum"
command. The command will take any number of arguments and add them together, before returning the sum of all the arguments to the user.
If your Discord bot is still running, you can stop its process with CTRL + C
.
Open your index.js
file again:
- nano index.js
To begin implementing the "!sum"
command you will use an else-if
block. After checking for the ping command name, it will check if the command name is equal to "sum"
. You will use an else-if
block since only one command will process at a time, so if the program matches the command name "ping"
, it doesn’t have to check for the "sum"
command. Add the following highlighted lines to your file:
...
if (command === "ping") {
const timeTaken = Date.now() - message.createdTimestamp;
message.reply(`Pong! This message had a latency of ${timeTaken}ms.`);
}
else if (command === "sum") {
}
});
...
You can begin implementing the code for the "sum"
command. The code for the "sum"
command will go inside the else-if
block you just created. Now, add the following code:
...
else if (command === "sum") {
const numArgs = args.map(x => parseFloat(x));
const sum = numArgs.reduce((counter, x) => counter + x);
message.reply(`The sum of all the arguments you provided is ${sum}!`);
}
...
You use the map
method on the arguments list to create a new list by using the parseFloat
function on each item in the args
array. This creates a new array (assigned to the constant numArgs
) in which all of the items are numbers instead of strings. This means later you can successfully find the sum of the numbers by adding them together.
The second line uses the reduce
method on the constant numArgs
providing a function that totals all the elements in the list. You assign the sum of all the elements in numArgs
to the constant sum
.
You then use the reply
method on the message object to reply to the user’s command with a template literal, which contains the sum of all the arguments the user sends to the bot.
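In isolation, the map and reduce steps behave like this (sumArgs is an illustrative name). Note that reduce with no initial value throws on an empty array, so a production bot might want to guard against !sum being called with no arguments:

```javascript
// Illustrative sketch of the mapping and reducing steps above.
function sumArgs(args) {
  const numArgs = args.map((x) => parseFloat(x));      // strings -> numbers
  return numArgs.reduce((counter, x) => counter + x);  // total them
}

console.log(sumArgs(["1", "2.5", "3"])); // 6.5
```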
This concludes implementing the "sum"
command. Now run the bot using the following command (in the same folder as index.js
):
- node index.js
You can now use the "!sum"
command in any channel the bot can view and message in.
The following is a completed version of the index.js
bot script:
const Discord = require("discord.js");
const config = require("./config.json");
const client = new Discord.Client({intents: ["GUILDS", "GUILD_MESSAGES"]});
const prefix = "!";
client.on("messageCreate", function(message) {
if (message.author.bot) return;
if (!message.content.startsWith(prefix)) return;
const commandBody = message.content.slice(prefix.length);
const args = commandBody.split(' ');
const command = args.shift().toLowerCase();
if (command === "ping") {
const timeTaken = Date.now() - message.createdTimestamp;
message.reply(`Pong! This message had a latency of ${timeTaken}ms.`);
}
else if (command === "sum") {
const numArgs = args.map(x => parseFloat(x));
const sum = numArgs.reduce((counter, x) => counter + x);
message.reply(`The sum of all the arguments you provided is ${sum}!`);
}
});
client.login(config.BOT_TOKEN);
In this step, you have further developed your Discord bot by implementing the sum
command.
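As a bot gains commands, the if/else chain in the script above can become unwieldy. One common refactoring, sketched here with illustrative handler names (not part of Discord.js), is a lookup table mapping command names to handler functions:

```javascript
// Hypothetical dispatch table: command names map to handler functions.
const commands = {
  ping: (args) => "Pong!",
  echo: (args) => args.join(" "),
};

function dispatch(command, args) {
  const handler = commands[command];
  if (!handler) return null; // unknown command: ignore it
  return handler(args);
}

console.log(dispatch("echo", ["hello", "world"])); // hello world
console.log(dispatch("unknown", []));              // null
```

Adding a new command then means adding one entry to the table rather than another else-if branch.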
You have successfully implemented a Discord bot that can handle multiple, different user commands and command arguments. If you want to expand on your bot, you could possibly implement more commands or try out more parts of the Discord API to craft a powerful Discord bot. You can review the Discord.js documentation or the Discord API documentation to expand your knowledge of the Discord API. In particular, you could convert your bot commands to slash commands, which is a best practice for Discord.js v13.
While creating Discord bots, you must always keep in mind the Discord API terms of service, which outlines how developers must use the Discord API. If you would like to learn more about Node.js, check out our How To Code in Node.js series.
Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front and backend, Node.js makes development more consistent and integrated.
In this guide, you’ll learn about three different methods to install Node.js on an Ubuntu 18.04 server.
This guide assumes that you are using Ubuntu 18.04. Before you begin, you should have a non-root user account with sudo
privileges set up on your system. You can learn how to do this by following the initial server setup tutorial for Ubuntu 18.04.
Ubuntu 18.04 contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 8.10.0. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.
To get this version, you can use the apt
package manager. Refresh your local package index:
- sudo apt update
Now install Node.js:
- sudo apt install nodejs
Verify you’ve installed Node.js successfully by querying node
for its version number:
- node -v
Output
v8.10.0
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, you’ll also want to install npm
, the Node.js package manager. You can install the npm
package with apt
:
- sudo apt install npm
This will allow you to install modules and packages to use with Node.js.
You’ve now successfully installed Node.js and npm
using apt
and the default Ubuntu software repositories. However, you may prefer to work with different versions of Node.js, package archives, or version managers. The next steps will discuss these elements, along with more flexible and robust methods of installation.
To install a more recent version of Node.js you can add the PPA (personal package archive) maintained by NodeSource. This will have more up-to-date versions of Node.js than the official Ubuntu repositories and will allow you to choose between several available versions of the platform.
First, install the PPA in order to get access to its contents. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 17.x
with your preferred version string (if different):
- cd ~
- curl -sL https://deb.nodesource.com/setup_17.x -o /tmp/nodesource_setup.sh
You can refer to the NodeSource documentation for more information on currently available versions.
If you’d like, you can inspect the contents of this script with nano
(or your preferred text editor):
- nano /tmp/nodesource_setup.sh
Once you’re satisfied the script is safe to run, exit the text editor. If you used nano
, you can exit by pressing CTRL + X
. Next, run the script with sudo
:
- sudo bash /tmp/nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. Now you can install the Node.js package as you did in the previous section:
- sudo apt install nodejs
Verify you’ve installed the new version by running node
with the -v
flag:
- node -v
Output
v17.3.0
Unlike the one in the default Ubuntu package repositories, this nodejs
package contains both node
and npm
, so you don’t need to install npm
separately.
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Run the following command to verify that npm
is installed and to create the configuration file:
- npm -v
Output
8.3.0
In order for some npm
packages to work (those that require compiling code from source, for example), you need to install the build-essential
package:
- sudo apt install build-essential
Now you have the necessary tools to work with npm
packages that require compiling code from source.
In this section, you successfully installed Node.js and npm
using apt
and the NodeSource PPA. Next, you’ll use the Node Version Manager to install and manage multiple versions of Node.js.
An alternative for installing Node.js is to use a tool called nvm
, the Node Version Manager (NVM). Rather than working at the operating system level, nvm
works at the level of an independent directory within your home directory. This means that you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm
allows you to access the newest versions of Node.js and retain and manage previous releases. It is a different utility from apt
, however, and the versions of Node.js that you manage with it are distinct from the versions you manage with apt
.
To install NVM on your Ubuntu 18.04 machine, visit the project’s GitHub page. Copy the curl
command from the README file that displays on the main page to get the most recent version of the installation script.
Before piping the command through to bash
, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash
segment at the end of the curl
command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh
Review the output and make sure you are comfortable with the changes it is making. Once you’re satisfied, run the same command with | bash
appended at the end. The URL you use will change depending on the latest version of NVM, but as of right now, the script can be downloaded and executed by running the following:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
This installs the nvm
script to your user account. In order to use it, first source the .bashrc
file:
- source ~/.bashrc
With nvm
installed, you can install isolated Node.js versions. First, ask nvm
what versions of Node are available:
- nvm ls-remote
Output
...
v14.18.2 (Latest LTS: Fermium)
v15.0.0
v15.0.1
v15.1.0
v15.2.0
v15.2.1
v15.3.0
v15.4.0
v15.5.0
v15.5.1
v15.6.0
v15.7.0
v15.8.0
v15.9.0
v15.10.0
v15.11.0
v15.12.0
v15.13.0
v15.14.0
v16.0.0
v16.1.0
v16.2.0
v16.3.0
v16.4.0
v16.4.1
v16.4.2
v16.5.0
v16.6.0
v16.6.1
v16.6.2
v16.7.0
v16.8.0
v16.9.0
v16.9.1
v16.10.0
v16.11.0
v16.11.1
v16.12.0
v16.13.0 (LTS: Gallium)
v16.13.1 (Latest LTS: Gallium)
v17.0.0
v17.0.1
v17.1.0
v17.2.0
v17.3.0
It’s a very long list, but you can install a version of Node by inputting any of the released versions listed. For example, to get version v16.13.1, run the following:
- nvm install v16.13.1
Output
Now using node v16.13.1 (npm v8.1.2)
Sometimes nvm
will switch to use the most recently installed version. But you can tell nvm
to use the version you just downloaded (if different):
- nvm use v16.13.1
Check the version currently being used by running the following:
- node -v
Output
v16.13.1
If you have multiple Node versions installed, you can run ls
to get a list of them:
- nvm ls
Output
-> v16.13.1
system
default -> v16.13.1
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v16.13.1) (default)
stable -> 16.13 (-> v16.13.1) (default)
lts/* -> lts/gallium (-> v16.13.1)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.8 (-> N/A)
lts/fermium -> v14.18.2 (-> N/A)
lts/gallium -> v16.13.1
You can also default to one of the versions:
- nvm alias default 16.13.1
Output
default -> 16.13.1 (-> v16.13.1)
This version will be automatically selected when a new session spawns. You can also reference it by the alias like in the following command:
- nvm use default
Output
Now using node v16.13.1 (npm v8.1.2)
Each version of Node will keep track of its own packages and has npm
available to manage these.
You can also have npm
install packages to the Node.js project’s ./node_modules
directory. Use the following syntax to install the express
module:
- npm install express
Output
added 50 packages, and audited 51 packages in 4s
2 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
npm notice
npm notice New minor version of npm available! 8.1.2 -> 8.3.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v8.3.0
npm notice Run npm install -g npm@8.3.0 to update!
npm notice
If you’d like to install the module globally, making it available to other projects using the same version of Node.js, you can add the -g
flag:
- npm install -g express
Output
added 50 packages, and audited 51 packages in 1s
2 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
This will install the package in:
- ~/.nvm/versions/node/16.13.1/lib/node_modules/express
Installing the module globally will let you run commands from the command line, but you’ll have to link the package into your local sphere to require it from within a program:
- npm link express
You can learn more about the options available to you with nvm
by running the following:
- nvm help
You’ve successfully installed Node by using the Node Version Manager, nvm
, to install and manage various versions of Node.
You can uninstall Node.js using apt
or nvm
, depending on the version you want to target. To remove the default repository version, you will use apt
at the system level. This command removes the package and retains the configuration files. This is useful if you plan to install the package again in the future:
- sudo apt remove nodejs
If you don’t want to save the configuration files for later use, then run the following command to uninstall the package and remove the configuration files associated with it:
- sudo apt purge nodejs
As a final step, you can remove any unused packages that were automatically installed with the removed package:
- sudo apt autoremove
To uninstall a version of Node.js that you have enabled using nvm
, first determine whether or not the version you would like to remove is the current active version:
- nvm current
If the version you are targeting is not the current active version, you can run:
- nvm uninstall node_version
Output
Uninstalled node node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you must first deactivate nvm
to enable your changes:
- nvm deactivate
Now you can uninstall the current version using the uninstall
command used previously. This removes all files associated with the targeted version of Node.js except the cached files that can be used for reinstallment.
There are quite a few ways to get up and running with Node.js on your Ubuntu 18.04 server. Your circumstances will dictate which of the methods is best for your needs. While using the packaged version in Ubuntu’s repository is one method, using nvm
or a NodeSource PPA offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
Node is a run-time environment that makes it possible to write server-side JavaScript. It has gained widespread adoption since its release in 2011. Writing server-side JavaScript can be challenging as a codebase grows due to the nature of the JavaScript language: dynamically and weakly typed.
Developers coming to JavaScript from other languages often complain about its lack of strong static typing, but this is where TypeScript comes into the picture, to bridge this gap.
TypeScript is a typed (optional) super-set of JavaScript that can help with building and managing large-scale JavaScript projects. It can be thought of as JavaScript with additional features like strong static typing, compilation, and object-oriented programming.
Note: TypeScript is technically a super-set of JavaScript, which means that all JavaScript code is valid TypeScript code.
Here are some benefits of using TypeScript:
- Optional static typing, which catches many errors at compile time instead of at runtime.
- Better editor tooling, such as autocompletion and inline type information.
- Easy gradual adoption, since all valid JavaScript is already valid TypeScript.
In this tutorial you will set up a Node project with TypeScript. You will build an Express application using TypeScript and transpile it down to JavaScript code.
Before you begin this guide, you will need Node.js installed on your system. You can accomplish this by following the How to Install Node.js and Create a Local Development Environment guide for your operating system.
To get started, create a new folder named node_project
and move into that directory:
- mkdir node_project
- cd node_project
Next, initialize it as an npm project:
- npm init -y
The -y
flag tells npm init
to automatically say “yes” to the defaults. You can always update this information later in your package.json
file.
Now that your npm project is initialized, you are ready to install and set up TypeScript.
Run the following command from inside your project directory to install the TypeScript:
- npm install --save-dev typescript
Output
added 1 package, and audited 2 packages in 1s
found 0 vulnerabilities
TypeScript uses a file called tsconfig.json
to configure the compiler options for a project. Create a tsconfig.json
file in the root of the project directory:
- nano tsconfig.json
Then paste in the following JSON:
{
"compilerOptions": {
"module": "commonjs",
"esModuleInterop": true,
"target": "es6",
"moduleResolution": "node",
"sourceMap": true,
"outDir": "dist"
},
"lib": ["es2015"]
}
Let’s go over some of the keys in the JSON snippet above:
- module: Specifies the module code generation method. Node uses commonjs.
- target: Specifies the output language level.
- moduleResolution: This helps the compiler figure out what an import refers to. The value node mimics the Node module resolution mechanism.
- outDir: This is the location to output .js files after transpilation. In this tutorial you will save it as dist.

To learn more about the key value options available, the official TypeScript documentation offers explanations of every option.
Now, it is time to install the Express framework and create a minimal server:
- npm install --save express@4.17.1
- npm install --save-dev @types/express@4.17.1
The second command installs the Express types for TypeScript support. Types in TypeScript are files, normally with an extension of .d.ts
. The files are used to provide type information about an API, in this case the Express framework.
This package is required because TypeScript and Express are independent packages. Without the @types/express
package, there is no way for TypeScript to know about the types of Express classes.
Next, create a src
folder in the root of your project directory:
- mkdir src
Then create a TypeScript file named app.ts
within it:
- nano src/app.ts
Open up the app.ts
file with a text editor of your choice and paste in the following code snippet:
import express from 'express';
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.send('Hello World!');
});
app.listen(port, () => {
return console.log(`Express is listening at http://localhost:${port}`);
});
The code above creates a Node server that listens on port 3000
for requests. To run the app, you first need to compile it to JavaScript using the following command:
- npx tsc
This uses the configuration file we created in the previous step to determine how to compile the code and where to place the result. In our case, the JavaScript is output to the dist
directory.
Run the JavaScript output with node
:
- node dist/app.js
If it runs successfully, a message will be logged to the terminal:
Output
Express is listening at http://localhost:3000
Now, you can visit http://localhost:3000
in your browser and you should see the message:
Output
Hello World!
Open the dist/app.js
file and you will find the transpiled version of the TypeScript code:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const app = (0, express_1.default)();
const port = 3000;
app.get('/', (req, res) => {
res.send('Hello World!');
});
app.listen(port, () => {
return console.log(`Express is listening at http://localhost:${port}`);
});
//# sourceMappingURL=app.js.map
At this point you have successfully set up your Node project to use TypeScript. Next you’ll set up the eslint linter to check your TypeScript code for errors.
Now you can configure TypeScript linting for the project. First, we install eslint
using npm
:
- npm install --save-dev eslint
Then, run eslint
’s initialization command to interactively set up the project:
- npx eslint --init
This will ask you a series of questions. For this project we’ll answer the following:
Finally, you will be prompted to install some additional eslint libraries. Choose Yes
. The process will finish and you’ll be left with the following configuration file:
module.exports = {
env: {
es2021: true,
node: true,
},
extends: ['eslint:recommended', 'plugin:@typescript-eslint/recommended'],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaVersion: 13,
sourceType: 'module',
},
plugins: ['@typescript-eslint'],
rules: {},
}
Run the linter to check all files with the .ts
TypeScript extension:
- npx eslint . --ext .ts
You’ve now set up the eslint linter to check your TypeScript syntax. Next you’ll update your npm configuration to add some convenient scripts for linting and running your project.
Updating the package.json File
It can be useful to put your commonly run command line tasks into npm scripts. npm scripts are defined in your package.json
file and can be run with the command npm run your_script_name
.
In this step you will add a start
script that will transpile the TypeScript code then run the resulting .js
application.
You will also add a lint
script to run the eslint linter on your TypeScript files.
Open the package.json
file and update it accordingly:
{
"name": "node_project",
"version": "1.0.0",
"description": "",
"main": "dist/app.js",
"scripts": {
"start": "tsc && node dist/app.js",
"lint": "eslint . --ext .ts",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"@types/express": "^4.17.1",
"@typescript-eslint/eslint-plugin": "^5.4.0",
"@typescript-eslint/parser": "^5.4.0",
"eslint": "^8.3.0",
"typescript": "^4.5.2"
},
"dependencies": {
"express": "^4.17.1"
}
}
In the snippet above, you updated the main
path to be the compiled app output, and added the start
and lint
commands to the scripts section.
When looking at the start
command, you’ll see that first the tsc
command is run, and then the node
command. This will compile and then run the generated output with node
.
The lint
command is the same as we ran in the previous step, minus the use of the npx
prefix which is not needed in this context.
In this tutorial, you learned about why TypeScript is useful for writing reliable JavaScript code. You also learned about some of the benefits of working with TypeScript.
Finally, you set up a Node project using the Express framework, but compiled and ran the project using TypeScript.
Working with files is one of the common tasks among developers. As your files grow in size, they start taking significant space on your hard drive. Sooner or later you may need to transfer the files to other servers or upload multiple files from your local machine to different platforms. Some of these platforms have file size limits, and won’t accept large files. To get around this, you can group the files into a single ZIP file. A ZIP file is an archive format that packs and compresses files with a lossless compression algorithm. The algorithm can reconstruct the data without any data loss. In Node.js, you can use the adm-zip
module to create and read ZIP archives.
In this tutorial, you will use adm-zip
module to compress, read, and decompress files. First, you’ll combine multiple files into a ZIP archive using adm-zip
. You’ll then list the ZIP archive contents. After that, you’ll add a file to an existing ZIP archive, and then finally, you’ll extract a ZIP archive into a directory.
To follow this tutorial, you’ll need:
Node.js installed on your local or server environment. Follow How to Install Node.js and Create a Local Development Environment to install Node.js.
Knowledge of how to write a Node.js program, see How To Write and Run Your First Program in Node.js.
A basic understanding of asynchronous programming in JavaScript. Visit our tutorial Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript to learn the basics.
Knowledge of how to work with files in Node.js. See the tutorial How To Work with Files using the fs Module in Node.js to review working with files.
In this step, you’ll create the directory for your project and install adm-zip
as a dependency. This directory is where you’ll keep your program files. You’ll also create another directory containing text files and an image. You’ll archive this directory in the next section.
Create a directory called zip_app
with the following command:
- mkdir zip_app
Navigate into the newly created directory with the cd
command:
- cd zip_app
Inside the directory, create a package.json
file to manage the project dependencies:
- npm init -y
The -y
option creates a default package.json
file.
Next, install adm-zip
with the npm install
command:
- npm install adm-zip
After you run the command, npm
will install adm-zip
and update the package.json
file.
Next, create a directory called test
and move into it:
- mkdir test && cd test
In this directory, you will create three text files and download an image. The three files will be filled with dummy content to make their file sizes larger. This will help to demonstrate ZIP compression when you archive this directory.
Create the file1.txt
and fill it with dummy content using the following command:
- yes "dummy content" | head -n 100000 > file1.txt
The yes
command logs the string dummy content
repeatedly. Using the pipe command |
, you send the output from the yes
command to be used as input for the head
command. The head
command prints part of the given input into the standard output. The -n
option specifies the number of lines that should be written to the standard output. Finally, you redirect the head
output to a new file file1.txt
using >
.
Create a second file with the string “dummy content” repeated 300,000 lines:
- yes "dummy content" | head -n 300000 > file2.txt
Create another file with the dummy content
string repeated 600,000 lines:
- yes "dummy content" | head -n 600000 > file3.txt
Finally, download an image into the directory using curl
:
- curl -O https://assets.digitalocean.com/how-to-process-images-in-node-js-with-sharp/underwater.png
Move back into the main project directory with the following command:
- cd ..
The ..
will move you to the parent directory, which is zip_app
.
You’ve now created the project directory, installed adm-zip
, and created a directory with files for archiving. In the next step, you’ll archive a directory using the adm-zip
module.
In this step, you’ll use adm-zip
to compress and archive the directory you created in the previous section.
To archive the directory, you’ll import the adm-zip
module and use the module’s addLocalFolder()
method to add the directory to the adm-zip
module’s ZIP object. Afterward, you’ll use the module’s writeZip()
method to save the archive in your local system.
Create and open a new file createArchive.js
in your preferred text editor. This tutorial uses nano
, a command-line text editor:
- nano createArchive.js
Next, require in the adm-zip
module in your createArchive.js
file:
const AdmZip = require("adm-zip");
The adm-zip
module provides a class that contains methods for creating ZIP archives.
Since it’s common to encounter large files during the archiving process, you might end up blocking the main thread until the ZIP archive is saved. To write non-blocking code, you’ll define an asynchronous function to create and save a ZIP archive.
In your createArchive.js
file, add the following highlighted code:
const AdmZip = require("adm-zip");
async function createZipArchive() {
const zip = new AdmZip();
const outputFile = "test.zip";
zip.addLocalFolder("./test");
zip.writeZip(outputFile);
console.log(`Created ${outputFile} successfully`);
}
createZipArchive();
createZipArchive
is an asynchronous function that creates a ZIP archive from a given directory. What makes it asynchronous is the async
keyword you defined before the function label. Within the function, you create an instance of the adm-zip
module, which provides methods you can use for reading and creating archives. When you create an instance, adm-zip
creates an in-memory ZIP where you can add files or directories.
Next, you define the archive name and store it in the outputFile
variable. To add the test
directory to the in-memory archive, you invoke the addLocalFolder()
method from adm-zip
with the directory path as an argument.
After the directory is added, you invoke the writeZip()
method from adm-zip
with a variable containing the name of the ZIP archive. The writeZip()
method saves the archive to your local disk.
Once that’s done, you invoke console.log()
to log that the ZIP file has been created successfully.
Finally, you call the createZipArchive()
function.
Before you run the file, wrap the code in a try…catch block to handle runtime errors:
const AdmZip = require("adm-zip");
async function createZipArchive() {
try {
const zip = new AdmZip();
const outputFile = "test.zip";
zip.addLocalFolder("./test");
zip.writeZip(outputFile);
console.log(`Created ${outputFile} successfully`);
} catch (e) {
console.log(`Something went wrong. ${e}`);
}
}
createZipArchive();
Within the try
block, the code will attempt to create a ZIP archive. If successful, the createZipArchive()
function will exit, skipping the catch
block. If creating a ZIP archive triggers an error, execution will skip to the catch
block and log the error in the console.
Save and exit the file in nano
with CTRL+X
. Enter y
to save the changes, and confirm the file by pressing ENTER
on Windows, or the RETURN
key on the Mac.
Run the createArchive.js
file using the node
command:
- node createArchive.js
You’ll receive the following output:
Output
Created test.zip successfully
List the directory contents to see if the ZIP archive has been created:
- ls
You’ll receive the following output showing the archive among the contents:
Output
createArchive.js node_modules package-lock.json
package.json test test.zip
With the confirmation that the ZIP archive has been created, you’ll compare the ZIP archive, and the test
directory file size to see if the compression works.
Check the test
directory size using the du
command:
- du -h test
The -h
flag instructs du
to show the directory size in a human-readable format.
After running the command, you will receive the following output:
Output
15M test
Next, check the test.zip
archive file size:
- du -h test.zip
The du
command logs the following output:
Output
760K test.zip
As you can see, creating the ZIP file has dropped the directory size from 15 megabytes (MB) to 760 kilobytes (KB), which is a significant difference. The ZIP file is more portable and smaller in size.
Now that you created a ZIP archive, you’re ready to list the contents in a ZIP file.
In this step, you’ll read and list all files in a ZIP archive using adm-zip
. To do that, you’ll instantiate the adm-zip
module with your ZIP archive path. You’ll then call the module’s getEntries()
method which returns an array of objects. Each object holds important information about an item in the ZIP archive. To list the files, you’ll iterate over the array and access the filename from the object, and log it in the console.
Create and open readArchive.js
in your favorite text editor:
- nano readArchive.js
In your readArchive.js
, add the following code to read and list contents in a ZIP archive:
const AdmZip = require("adm-zip");
async function readZipArchive(filepath) {
try {
const zip = new AdmZip(filepath);
for (const zipEntry of zip.getEntries()) {
console.log(zipEntry.toString());
}
} catch (e) {
console.log(`Something went wrong. ${e}`);
}
}
readZipArchive("./test.zip");
First, you require in the adm-zip
module.
Next, you define the readZipArchive()
function, which is an asynchronous function. Within the function, you create an instance of adm-zip
with the path of the ZIP file you want to read. The file path is provided by the filepath
parameter. adm-zip
will read the file and parse it.
After reading the archive, you define a for...of
statement that iterates over objects in an array that the getEntries()
method from adm-zip
returns when invoked. On each iteration, the object is assigned to the zipEntry
variable. Inside the loop, you convert the object into a string that represents the object using the Node.js toString()
method, then log it in the console using the console.log()
method.
Finally, you invoke the readZipArchive()
function with the ZIP archive file path as an argument.
Save and exit your file, then run the file with the following command:
- node readArchive.js
You will get output that resembles the following (edited for brevity):
Output
{
"entryName": "file1.txt",
"name": "file1.txt",
"comment": "",
"isDirectory": false,
"header": {
...
},
"compressedData": "<27547 bytes buffer>",
"data": "<null>"
}
...
The console will log four objects. The other objects have been edited out to keep the tutorial brief.
Each file in the archive is represented with an object similar to the one in the preceding output. To get the filename for each file, you need to access the name
property.
In your readArchive.js
file, add the following highlighted code to access each filename:
const AdmZip = require("adm-zip");
async function readZipArchive(filepath) {
try {
const zip = new AdmZip(filepath);
for (const zipEntry of zip.getEntries()) {
console.log(zipEntry.name);
}
} catch (e) {
console.log(`Something went wrong. ${e}`);
}
}
readZipArchive("./test.zip");
Save and exit your text editor. Now, run the file again with the node
command:
- node readArchive.js
Running the file results in the following output:
Output
file1.txt
file2.txt
file3.txt
underwater.png
The output now logs the filename of each file in the ZIP archive.
You can now read and list each file in a ZIP archive. In the next section, you’ll add a file to an existing ZIP archive.
In this step, you’ll create a file and add it to the ZIP archive you created earlier without extracting it. First, you’ll read the ZIP archive by creating an adm-zip
instance. Second, you’ll invoke the module’s addFile()
method to add the file in the ZIP. Finally, you’ll save the ZIP archive in the local system.
Create another file file4.txt
with dummy content repeated 600,000 lines:
- yes "dummy content" | head -n 600000 > file4.txt
Create and open updateArchive.js
in your text editor:
- nano updateArchive.js
Require in the adm-zip
module and the fs
module that allows you to work with files in your updateArchive.js
file:
const AdmZip = require("adm-zip");
const fs = require("fs").promises;
You require in the promise-based version of the fs
module, which allows you to write asynchronous code. When you invoke an fs
method, it will return a promise.
Next in your updateArchive.js
file, add the following highlighted code to add a new file to the ZIP archive:
const AdmZip = require("adm-zip");
const fs = require("fs").promises;
async function updateZipArchive(filepath) {
try {
const zip = new AdmZip(filepath);
const content = await fs.readFile("./file4.txt");
zip.addFile("file4.txt", content);
zip.writeZip(filepath);
console.log(`Updated ${filepath} successfully`);
} catch (e) {
console.log(`Something went wrong. ${e}`);
}
}
updateZipArchive("./test.zip");
updateZipArchive
is an asynchronous function that reads a file in the filesystem and adds it to an existing ZIP. In the function, you create an instance of adm-zip
with the ZIP archive file path in the filepath
as a parameter. Next, you invoke the fs
module’s readFile()
method to read the file in the file system. The readFile()
method returns a promise, which you resolve with the await
keyword (await
is valid in only asynchronous functions). Once resolved, the method returns a buffer object, which contains the file contents.
Next, you invoke the addFile()
method from adm-zip
. The method takes two arguments. The first argument is the filename you want to add to the archive, and the second argument is the buffer object containing the contents of the file that the readFile()
method reads.
Afterwards, you invoke adm-zip
module’s writeZip()
method to save and write new changes in the ZIP archive. Once that’s done, you call the console.log()
method to log a success message.
Finally, you invoke the updateZipArchive()
function with the Zip archive file path as an argument.
Save and exit your file. Run the updateArchive.js
file with the following command:
- node updateArchive.js
You’ll see output like this:
Output
Updated ./test.zip successfully
Now, confirm that the ZIP archive contains the new file. Run the readArchive.js
file to list the contents in the ZIP archive with the following command:
- node readArchive.js
You’ll receive the following output:
file1.txt
file2.txt
file3.txt
file4.txt
underwater.png
This confirms that the file has been added to the ZIP.
Now that you can add a file to an existing archive, you’ll extract the archive in the next section.
In this step, you’ll read and extract all contents in a ZIP archive into a directory. To extract a ZIP archive, you’ll instantiate adm-zip
with the archive file path. After that, you’ll invoke the module’s extractAllTo()
method with the directory name you want your extracted ZIP contents to reside.
Create and open extractArchive.js
in your text editor:
- nano extractArchive.js
Require in the adm-zip
module and the path
module in your extractArchive.js
file:
const AdmZip = require("adm-zip");
const path = require("path");
The path
module provides helpful methods for dealing with file paths.
Still in your extractArchive.js
file, add the following highlighted code to extract an archive:
const AdmZip = require("adm-zip");
const path = require("path");
async function extractArchive(filepath) {
try {
const zip = new AdmZip(filepath);
const outputDir = `${path.parse(filepath).name}_extracted`;
zip.extractAllTo(outputDir);
console.log(`Extracted to "${outputDir}" successfully`);
} catch (e) {
console.log(`Something went wrong. ${e}`);
}
}
extractArchive("./test.zip");
extractArchive()
is an asynchronous function that takes a parameter containing the file path of the ZIP archive. Within the function, you instantiate adm-zip
with the ZIP archive file path provided by the filepath
parameter.
Next, you define a template literal. Inside the template literal placeholder (${}
), you invoke the parse()
method from the path
module with the file path. The parse()
method returns an object. To get the name of the ZIP file without the file extension, you append the name
property to the object that the parse()
method returns. Once the archive name is returned, the template literal interpolates the value with the _extracted
string. The value is then stored in the outputDir
variable. This will be the name of the extracted directory.
Next, you invoke adm-zip
module’s extractAllTo
method with the directory name stored in the outputDir
to extract the contents in the directory. After that, you invoke console.log()
to log a success message.
Finally, you call the extractArchive()
function with the ZIP archive path.
Save your file and exit the editor, then run the extractArchive.js
file with the following command:
- node extractArchive.js
You receive the following output:
Output
Extracted to "test_extracted" successfully
Confirm that the directory containing the ZIP contents has been created:
- ls
You will receive the following output:
Output
createArchive.js file4.txt package-lock.json
readArchive.js test.zip updateArchive.js
extractArchive.js node_modules package.json
test test_extracted
Now, navigate into the directory containing the extracted contents:
- cd test_extracted
List the contents in the directory:
- ls
You will receive the following output:
Output
file1.txt file2.txt file3.txt file4.txt underwater.png
You can now see that the directory has all the files that were in the original directory.
You’ve now extracted the ZIP archive contents into a directory.
In this tutorial, you created a ZIP archive, listed its contents, added a new file to the archive, and extracted all of its content into a directory using adm-zip
module. This will serve as a good foundation for working with ZIP archives in Node.js.
To learn more about the adm-zip
module, view the adm-zip documentation. To continue building your Node.js knowledge, see the How To Code in Node.js series.
__dirname
is a Node.js module-scoped variable that tells you the absolute path of the directory containing the currently executing file.
In this article, you will explore how to implement __dirname
in your Node.js project.
To complete this tutorial, you will need:
This tutorial was verified with Node.js v17.2.0 and npm
v8.2.0.
This tutorial will use the following sample directory structure to explore how __dirname
works:
dirname-example
├──index.js
├──public
├──src
│ ├──helpers.js
│ └──api
│ └──controller.js
├──cronjobs
│ └──hello.js
└──package.json
You can start by creating a dirname-example
directory in your terminal:
- mkdir dirname-example
Navigate to the project directory:
- cd dirname-example
Initialize it as a Node.js project:
- npm init --yes
Now, you will create the directories and files to experiment with.
__dirname
You can use __dirname
to check on which directories your files live.
Create and edit controller.js
in the api
subdirectory in the src
directory:
console.log(__dirname) // "/Users/Sam/dirname-example/src/api"
console.log(process.cwd()) // "/Users/Sam/dirname-example"
Then, run the script:
- node src/api/controller.js
Create and edit hello.js
in the cronjobs
directory:
console.log(__dirname) // "/Users/Sam/dirname-example/cronjobs"
console.log(process.cwd()) // "/Users/Sam/dirname-example"
Then, run the script:
- node cronjobs/hello.js
Notice that __dirname
has a different value depending on which file logs it. The process.cwd()
method, by contrast, returns the directory the Node.js process was launched from - the project directory. The __dirname
variable always returns the absolute path of where your files live.
In this section, you will explore how to use __dirname
to make new directories, point to them, as well as add new files.
To create a new directory in your index.js
file, insert __dirname
as the first argument to path.join()
and the name of the new directory as the second:
const fs = require('fs');
const path = require('path');
const dirPath = path.join(__dirname, '/pictures');
fs.mkdirSync(dirPath);
Now you’ve created a new directory, pictures
, after calling on the mkdirSync()
method, which contains __dirname
as the absolute path.
Another unique feature is its ability to point to directories. In your index.js
file, declare a variable and pass in the value of __dirname
as the first argument in path.join()
, and your directory containing static files as the second:
express.static(path.join(__dirname, '/public'));
Here, you’re telling Node.js to use __dirname
to point to the public
directory that contains static files.
You may also add files to an existing directory. In your index.js
file, declare a variable that joins __dirname
, the target directory, and the name of the file you want to add:
const fs = require('fs');
const path = require('path');
const filePath = path.join(__dirname, '/pictures', 'hello.jpeg');
fs.openSync(filePath, 'w');
Using the openSync()
method with the 'w' flag will create the file if it does not already exist within your directory.
Node.js provides a way for you to make and point to directories, as well as add files to existing directories, using the __dirname variable.
For further reading, check out the Node.js documentation for __dirname
, and the tutorial on using __dirname
in the Express.js framework.
cron
provides a way to repeat a task at a specific time interval. Repetitive tasks such as logging and performing backups may need to occur on a daily, weekly, or monthly basis.
One method for implementing cron
on a Node.js server is by using the node-cron
module. This library uses the crontab
syntax, which may be familiar to users with previous experience with using cron
in Unix-like operating systems.
In this article, you will use node-cron
to periodically delete log files from the server. You will also be presented with two other common use cases - backing up a database and sending scheduled emails.
To follow this tutorial, you’ll need:
This tutorial was verified with Node v17.2.0, npm
v8.1.4, node-cron
v2.0.3, shelljs
v0.8.4, and nodemailer
v6.7.2.
To get started, create a new Node application by opening your terminal and creating a new folder for your project:
- mkdir node-cron-example
Next, change into the new project directory:
- cd node-cron-example
Then initialize it, which creates a package.json
file that you will use to track dependencies:
- npm init --yes
Add the node-cron
module by running the following command:
- npm install node-cron@3.0.0
The node-cron
module is the task scheduler.
The project dependencies are installed. Let’s build the server.
Now, you can build the server and use node-cron
to schedule a task to run every minute.
Create a new cron-ping.js
file:
- nano cron-ping.js
Then, require node-cron
:
const cron = require('node-cron');
Next, add the following lines of code to cron-ping.js
:
// ...
// Schedule tasks to be run on the server.
cron.schedule('* * * * *', function() {
console.log('running a task every minute');
});
These asterisks are part of the crontab
syntax to represent different units of time:
* * * * * *
| | | | | |
| | | | | day of week
| | | | month
| | | day of month
| | hour
| minute
second ( optional )
A single asterisk behaves like a wildcard, meaning the task will be run for every instance of that unit of time. Five asterisks ('* * * * *'
) represent the crontab
default of running every minute.
Numbers in the place of asterisks will be treated as values for that unit of time, allowing you to schedule tasks to occur daily, weekly, or on more complex schedules.
Note: Learn more about how this notation works in How To Use Cron to Automate Tasks on a VPS.
Now, run the script:
- node cron-ping.js
After several minutes, you will get the following result:
Outputrunning a task every minute
running a task every minute
running a task every minute
...
You have an example task running every minute. You can stop the server with CTRL+C
(CONTROL+C
).
Now, let’s look at how to run tasks in more detail.
Consider a scenario where you need to routinely delete the log file from the server on the twenty-first day of every month. You can accomplish this with node-cron
.
Create an example log file named error.log
:
- nano error.log
Then, add a test message:
This is an example error message in a log file that will be removed on the twenty-first day of the month.
Create a new cron-delete.js
file:
- nano cron-delete.js
This task will use fs
to unlink
a file. Add it to the top of the file:
const cron = require('node-cron');
const fs = require('fs');
Next, add the following lines of code:
// ...
// Remove the error.log file every twenty-first day of the month.
cron.schedule('0 0 21 * *', function() {
console.log('---------------------');
console.log('Running Cron Job');
fs.unlink('./error.log', err => {
if (err) throw err;
console.log('Error file successfully deleted');
});
});
Notice the pattern: 0 0 21 * *
.
- The minute and hour values are 0
and 0
(“00:00” - the start of the day).
- The day of the month value is 21
.
Now, run the script:
- node cron-delete.js
On the twenty-first of the month, you will get the following output:
Output---------------------
Running Cron Job
Error file successfully deleted
You probably do not want to wait for the twenty-first of the month to verify the task has been executed. You can modify the scheduler to run in a shorter time interval - like every minute.
Check your server directory. The error.log
file will be deleted.
You can run any actions inside the scheduler. Actions ranging from creating a file to sending emails and running scripts. Let’s take a look at more use cases.
Using node-cron
to Back Up Databases
Ensuring the preservation of user data is key to any business. If an unforeseen event happens and your database becomes corrupted or damaged, you will need to restore your database from a backup. You will be in serious trouble if you do not have any form of existing backup for your business.
Consider a scenario where you need to routinely back up a dump of the database at 11:59 PM every day. You can accomplish this with node-cron
.
Note: This use case entails setting up a local SQLite database. The finer details of the installation and creation of a database are not covered here. Feel free to substitute with another shell command.
Assume that you have SQLite installed and running on your environment. Given a database named database.sqlite
, your shell command for making a database backup may resemble this:
- sqlite3 database.sqlite .dump > data_dump.sql
This command takes the database database.sqlite
, runs the .dump
command, and outputs the result to a file named data_dump.sql
.
Next, install shelljs
, a Node module that will allow you to run the previous shell command:
- npm install shelljs@0.8.4
Create a new cron-dump.js
file:
- nano cron-dump.js
Then, require shelljs
:
const cron = require('node-cron');
const shell = require('shelljs');
Next, add the following lines of code:
// ...
// Backup a database at 11:59 PM every day.
cron.schedule('59 23 * * *', function() {
console.log('---------------------');
console.log('Running Cron Job');
if (shell.exec('sqlite3 database.sqlite .dump > data_dump.sql').code !== 0) {
shell.exit(1);
}
else {
shell.echo('Database backup complete');
}
});
Notice the pattern: 59 23 * * *
.
- The minute value is 59
.
- The hour value is 23
(11 PM).
This code will run the backup shell command. If it is successful, it will echo a message. Otherwise, if there is an error, it will exit.
Now, run the script:
- node cron-dump.js
At 11:59 PM, you will get the following output:
Output---------------------
Running Cron Job
Database backup complete
You probably do not want to wait for 11:59 PM to verify the task has been executed. You can modify the scheduler to run in a shorter time interval.
Check your server directory. A data_dump.sql
file will be present.
Next, let’s look at sending periodic emails.
Using node-cron
to Send Scheduled Emails
Consider a scenario where you curate a list of interesting links and then email them to subscribers every Wednesday. You can accomplish this with node-cron
.
Nodemailer supports test accounts provided by Ethereal Email. Create an Ethereal Account and use the username and password generated for you.
Warning: It is highly recommended that you do not use your personal email account for this step. Consider using a new separate test account to avoid any risk to your personal email account.
Next, install nodemailer
, a Node module that will allow you to send emails:
- npm install nodemailer@6.7.2
Create a new cron-mail.js
file:
- nano cron-mail.js
Then, require nodemailer
:
const cron = require('node-cron');
const nodemailer = require('nodemailer');
Add a section that defines the mailer and sets the username and password for an email account:
// ...
// Create mail transporter.
let transporter = nodemailer.createTransport({
host: 'your_demo_email_smtp_host.example.com',
port: your_demo_email_port,
auth: {
user: 'your_demo_email_address@example.com',
pass: 'your_demo_email_password'
}
});
Warning: This step is presented for example purposes only. In a production environment, you would use environment variables to keep your password a secret. Do not save your password (credentials) in any code you are uploading to code repositories like GitHub.
Next, add the following lines of code:
// ...
// Sending emails every Wednesday.
cron.schedule('0 0 * * 3', function() {
console.log('---------------------');
console.log('Running Cron Job');
let messageOptions = {
from: 'your_demo_email_address@example.com',
to: 'your_demo_email_address@example.com',
subject: 'Scheduled Email',
text: 'Hi there. This email was automatically sent by us.'
};
transporter.sendMail(messageOptions, function(error, info) {
if (error) {
throw error;
} else {
console.log('Email successfully sent!');
}
});
});
Notice the pattern: 0 0 * * 3
.
- The minute and hour values are 0
and 0
(“00:00” - the start of the day).
- The day of the week value is 3
(Wednesday).
This code will use the credentials provided to send an email to yourself, with the subject line 'Scheduled Email'
and the body text 'Hi there. This email was automatically sent by us.'
If sending fails, it will log an error.
Now, run the script:
- node cron-mail.js
On Wednesday, you will get the following output:
Output---------------------
Running Cron Job
Email successfully sent!
You probably do not want to wait for Wednesday to verify the task has been executed. You can modify the scheduler to run in a shorter time interval.
Open the Ethereal Email mailbox. There will be a new 'Scheduled Email'
in the inbox.
In this article, you learned how to use node-cron
to schedule jobs on a Node.js server. You were introduced to the broader concept of automating repetitive tasks in a consistent and predictable manner. This concept can be applied to your current and future projects.
There are other task scheduler tools available. Be sure to evaluate them to identify which tool is best suited for your particular project.
If you’d like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
The Node.js ecosystem provides a set of tools for interfacing with databases. One of those tools is node-postgres, which contains modules that allow Node.js to interface with the PostgreSQL database. Using node-postgres
, you will be able to write Node.js programs that can access and store data in a PostgreSQL database.
In this tutorial, you’ll use node-postgres
to connect and query the PostgreSQL (Postgres in short) database. First, you’ll create a database user and the database in Postgres. You will then connect your application to the Postgres database using the node-postgres
module. Afterwards, you will use node-postgres
to insert, retrieve, and modify data in the PostgreSQL database.
To complete this tutorial, you will need:
A non-root user account with sudo
privileges and a firewall enabled on Ubuntu 20.04. Follow our tutorial Initial Server Setup with Ubuntu 20.04 to setup your server.
Node.js installed on Ubuntu. If you don’t have Node.js installed, follow How To Install Node.js on Ubuntu 20.04.
PostgreSQL installed on your server. Follow the guide How To Install and Use PostgreSQL on Ubuntu 20.04 to install PostgreSQL on Ubuntu.
Basic knowledge of how to write queries in PostgreSQL, see An Introduction to Queries in PostgreSQL for more details.
Basics on how to write a Node.js program, see How To Write and Run Your First Program in Node.js.
Basic understanding of how to write asynchronous functions in JavaScript. Read through our Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript tutorial to learn the basics.
In this step, you will create the directory for the node application and install node-postgres
using npm
. This directory is where you will build your application and the configuration files it needs to interact with your PostgreSQL database.
Create the directory for your project using the mkdir
command:
- mkdir node_pg_app
Navigate into the newly created directory using the cd
command:
- cd node_pg_app
Initialize the directory with a package.json
file using the npm init
command:
- npm init -y
The -y
flag creates a default package.json
file.
Next, install the node-postgres
module with npm install
:
- npm install pg
You’ve now set up the directory for your project and installed node-postgres
as a dependency. You’re now ready to create a user and a database in Postgres.
In this step, you’ll create a database user and the database for your application.
When you install Postgres on Ubuntu for the first time, it creates a user postgres
on your system, a database user named postgres
, and a database postgres
. The user postgres
allows you to open a PostgreSQL session where you can do administrative tasks such as creating users and databases.
For local connections, PostgreSQL uses an ident-style authentication scheme, which allows a user on Ubuntu to log in to the Postgres shell as long as the Ubuntu username matches the Postgres user. Since you already have a postgres
user on Ubuntu and a postgres
user in PostgreSQL created on your behalf, you’ll be able to log in to the Postgres shell.
To login, switch the Ubuntu user to postgres
with sudo
and login into the Postgres shell using the psql
command:
- sudo -u postgres psql
The command’s arguments represent:
- -u
: a flag that switches to the given user on Ubuntu. Passing the postgres
user as an argument switches the user on Ubuntu to postgres
.
- psql
: a Postgres interactive terminal program where you can enter SQL commands to create databases, roles, tables, and more.
Once you log in to the Postgres shell, your terminal will look like the following:
postgres=#
postgres
is the name of the database you’ll be interacting with, and the #
denotes that you’re logged in as a superuser.
For the Node application, you’ll create a separate user and database that the application will use to connect to Postgres.
To do that, create a new role with a strong password:
- CREATE USER fish_user WITH PASSWORD 'password';
A role in Postgres can be considered as a user or group depending on your use case. In this tutorial, you’ll use it as a user.
Next, create a database and assign ownership to the user you created:
- CREATE DATABASE fish OWNER fish_user;
Assigning the database ownership to fish_user
grants the role privileges to create, drop, and insert data into the tables in the fish
database.
With the user and database created, exit out of the Postgres interactive shell:
- \q
To login into the Postgres shell as fish_user
, you need to create a user on Ubuntu with a name similar to the Postgres user you created.
Create a user with the adduser
command:
- sudo adduser fish_user
You have now created a user on Ubuntu, a PostgreSQL user, and a database for your Node application. Next, you’ll log in to the PostgreSQL interactive shell using the fish_user
and create a table.
In this section, you’ll open the Postgres shell with the user you created in the previous section on Ubuntu. Once you login into the shell, you’ll create a table for the Node.js app.
To open the shell as the fish_user
, enter the following command:
- sudo -u fish_user psql -d fish
sudo -u fish_user
switches your Ubuntu user to fish_user
and then runs the psql
command as that user. The -d
flag specifies the database you want to connect to, which is fish
in this case. If you don’t specify the database, psql
will try to connect to fish_user
database by default, which it won’t find and it will throw an error.
Once you’re logged in the psql
shell, your shell prompt will look like the following:
fish=>
fish
denotes that you’re now connected to the fish
database.
You can verify the connection using the \conninfo
command:
- \conninfo
You will receive output similar to the following:
OutputYou are connected to database "fish" as user "fish_user" via socket in "/var/run/postgresql" at port "5432".
The output confirms that you have indeed logged in as a fish_user
and you’re connected to the fish
database.
Next, you’ll create a table that will contain the data your application will insert.
The table you’ll create will keep track of shark names and their colors. When populated with data, it will look like the following:
id | name | color |
---|---|---|
1 | sammy | blue |
2 | jose | teal |
Using the SQL create table
command, create a table:
- CREATE TABLE shark(
- id SERIAL PRIMARY KEY,
- name VARCHAR(50) NOT NULL,
- color VARCHAR(50) NOT NULL);
The CREATE TABLE shark
command creates a table with 3 columns:
- id
: an auto-incrementing field and primary key for the table. Each time you insert a row, Postgres will increment and populate the id
value.
- name
and color
: fields that can store 50 characters each. NOT NULL
is a constraint that prevents the fields from being empty.
Verify if the table has been created with the right owner:
- \dt
The \dt
command list all tables in the database.
When you run the command, the output will resemble the following:
List of relations
Schema | Name | Type | Owner
--------+-------+-------+-----------
public | shark | table | fish_user
(1 row)
The output confirms that the fish_user
owns the shark
table.
Now exit out of the Postgres shell:
- \q
It will take you back to the project directory.
With the table created, you’ll use the node-postgres
module to connect to Postgres.
In this step, you’ll use node-postgres
to connect your Node.js application to the PostgreSQL database. To do that, you’ll use node-postgres
to create a connection pool. A connection pool functions as a cache for database connections allowing your app to reuse the connections for all the database requests. This can speed up your application and save your server resources.
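The pooling idea can be sketched with a toy implementation — purely illustrative, not how node-postgres works internally:

```javascript
// A toy illustration of pooling: idle connections are handed back out
// instead of a new connection being created for every request.
class ToyPool {
  constructor(createConnection) {
    this.createConnection = createConnection;
    this.idle = [];
  }
  acquire() {
    // Reuse an idle connection when one exists; otherwise create a new one.
    return this.idle.pop() || this.createConnection();
  }
  release(connection) {
    this.idle.push(connection);
  }
}

let created = 0;
const pool = new ToyPool(() => ({ id: ++created }));

const first = pool.acquire();  // creates connection 1
pool.release(first);           // return it to the pool
const second = pool.acquire(); // reuses connection 1

console.log(created, first === second); // 1 true
```

Because establishing a real database connection involves a TCP handshake and authentication, reusing connections this way avoids that cost on every query.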
Create and open a db.js
file in your preferred editor. In this tutorial, you’ll use nano
, a terminal text editor:
- nano db.js
In your db.js
file, require in the node-postgres
module and use destructuring assignment to extract a class Pool
from node-postgres
.
const { Pool } = require('pg')
Next, create a Pool
instance to create a connection pool:
const { Pool} = require('pg')
const pool = new Pool({
user: 'fish_user',
database: 'fish',
password: 'password',
port: 5432,
host: 'localhost',
})
When you create the Pool
instance, you pass a configuration object as an argument. This object contains the details node-postgres
will use to establish a connection to Postgres.
The object defines the following properties:
- user
: the user you created in Postgres.
- database
: the name of the database you created in Postgres.
- password
: the password for the user fish_user
.
- port
: the port Postgres is listening on. 5432
is the default port.
- host
: the Postgres server you want node-postgres
to connect to. Passing it localhost
will connect node-postgres
to the Postgres server installed on your system. If your Postgres server resides on another Droplet, your host
would look like this: host: server_ip_address
.
Note: In production, it’s recommended to keep the configuration values in a different file, such as the .env
file. This file is then added to the .gitignore
file if using Git to avoid tracking it with version control. The advantage is that it hides sensitive information, such as your password
, user
, and database
from attackers.
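As a sketch of that approach, the same values can be read from environment variables with hard-coded development fallbacks (the PG* variable names below mirror the conventional libpq names, but treat them as an assumption for your setup):

```javascript
// Build the Pool configuration from the environment instead of hard-coding it.
// Each value falls back to a development default when the variable is unset.
const config = {
  user: process.env.PGUSER || 'fish_user',
  database: process.env.PGDATABASE || 'fish',
  password: process.env.PGPASSWORD || 'password',
  port: Number(process.env.PGPORT || 5432),
  host: process.env.PGHOST || 'localhost',
};

// The config object can then be passed to new Pool(config).
console.log(typeof config.port, typeof config.host); // 'number' 'string'
```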
Once you create the instance, the database connection is established and the Pool
object is stored in the pool
variable. To use this anywhere in your app, you will need to export it. In your db.js
file, require in and define an instance of the Pool
object, and set its properties and values:
const { Pool } = require("pg");
const pool = new Pool({
user: "fish_user",
database: "fish",
password: "password",
port: 5432,
host: "localhost",
});
module.exports = { pool };
Save the file and exit nano
by pressing CTRL+X
. Enter y
to save the changes, and confirm your file name by pressing ENTER
or RETURN
key on Mac.
Now that you’ve connected your application to Postgres, you’ll use this connection to insert data in Postgres.
In this step, you’ll create a program that adds data into the PostgreSQL database using the connection pool you created in the db.js
file. To ensure that the program inserts different data each time it runs, you’ll give it functionality to accept command-line arguments. When running the program, you’ll pass it the name and color of the shark.
Create and open insertData.js
file in your editor:
- nano insertData.js
In your insertData.js
file, add the following code to make the script process command-line arguments:
const { pool } = require("./db");
async function insertData() {
const [name, color] = process.argv.slice(2);
console.log(name, color);
}
insertData();
First, you require in the pool
object from the db.js
file. This allows your program to use the database connection to query the database.
Next, you declare the insertData()
function as an asynchronous function with the async
keyword. This lets you use the await
keyword to make database requests asynchronous.
Within the insertData()
function, you use the process
module to access the command-line arguments. The Node.js process.argv
method returns all arguments in an array including the node
and insertData.js
arguments.
For example, when you run the script on the terminal with node insertData.js sammy blue
, the process.argv
method will return an array: ['node', 'insertData.js', 'sammy', 'blue']
(the array has been edited for brevity).
To skip the first two elements: node
and insertData.js
, you append JavaScript’s slice()
method to the process.argv
method. This returns elements starting from index 2 onwards. These arguments are then destructured into name
and color
variables.
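You can try the slicing and destructuring in isolation with a hard-coded copy of the arguments array:

```javascript
// Simulate the array process.argv would hold for: node insertData.js sammy blue
const argv = ['node', 'insertData.js', 'sammy', 'blue'];

// Drop the interpreter and script path, keeping only the user-supplied values.
const [name, color] = argv.slice(2);
console.log(name, color); // sammy blue
```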
Save your file and exit nano
with CTRL+X
. Run the file using node
and pass it the arguments sammy
, and blue
:
- node insertData.js sammy blue
After running the command, you will see the following output:
Outputsammy blue
The function can now access the name
and shark color
from the command-line arguments. Next, you’ll modify the insertData()
function to insert data into the shark
table.
Open the insertData.js
file in your text editor again and add the highlighted code:
const { pool } = require("./db");
async function insertData() {
const [name, color] = process.argv.slice(2);
const res = await pool.query(
"INSERT INTO shark (name, color) VALUES ($1, $2)",
[name, color]
);
console.log(`Added a shark with the name ${name}`);
}
insertData();
Now, the insertData()
function defines the name
and color
of the shark. Next, it awaits the pool.query
method from node-postgres
that takes an SQL statement INSERT INTO shark (name, color) ...
as the first argument. The SQL statement inserts a record into the shark
table. It uses what’s called a parameterized query. $1
, and $2
corresponds to the name
and color
variables in the array provided in the pool.query()
method as a second argument: [name, color]
. When Postgres is executing the statement, the variables are substituted safely protecting your application from SQL injection. After the query executes, the function logs a success message using console.log()
.
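To see why this protection matters, consider what naive string interpolation (which you should never do) would produce for a hostile input — this snippet only builds the string; it never touches a database:

```javascript
// An attacker-controlled value smuggled into a naively interpolated query.
const name = "sammy', 'blue'); DROP TABLE shark; --";
const unsafe = `INSERT INTO shark (name, color) VALUES ('${name}', 'blue')`;

// The resulting string now contains a second, destructive statement.
console.log(unsafe.includes('DROP TABLE shark')); // true
```

With the $1 and $2 placeholders, the same input would simply be stored as an odd-looking literal name rather than being executed as SQL.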
Before you run the script, wrap the code inside insertData()
function in a try...catch
block to handle runtime errors:
const { pool } = require("./db");
async function insertData() {
const [name, color] = process.argv.slice(2);
try {
const res = await pool.query(
"INSERT INTO shark (name, color) VALUES ($1, $2)",
[name, color]
);
console.log(`Added a shark with the name ${name}`);
} catch (error) {
console.error(error)
}
}
insertData()
When the function runs, the code inside the try
block executes. If successful, the function will skip the catch
block and exit. However, if an error is triggered inside the try
block, the catch
block will execute and log the error in the console.
Your program can now take command-line arguments and use them to insert a record into the shark
table.
Save and exit out of your text editor. Run the insertData.js
file with sammy
and blue
as command-line arguments:
- node insertData.js sammy blue
You’ll receive the following output:
OutputAdded a shark with the name sammy
Running the command inserts a record in the shark table with the name sammy
and the color blue
.
Next, execute the file again with jose
and teal
as command-line arguments:
- node insertData.js jose teal
Your output will look similar to the following:
OutputAdded a shark with the name jose
This confirms you inserted another record into the shark
table with the name jose
and the color teal
.
You’ve now inserted two records in the shark
table. In the next step, you’ll retrieve the data from the database.
In this step, you’ll retrieve all records in the shark
table using node-postgres
, and log them into the console.
Create and open a file retrieveData.js
in your favorite editor:
- nano retrieveData.js
In your retrieveData.js
, add the following code to retrieve data from the database:
const { pool } = require("./db");
async function retrieveData() {
try {
const res = await pool.query("SELECT * FROM shark");
console.log(res.rows);
} catch (error) {
console.error(error);
}
}
retrieveData()
The retrieveData()
function reads all rows in the shark
table and logs them in the console. Within the function try
block, you invoke the pool.query()
method from node-postgres
with an SQL statement as an argument. The SQL statement SELECT * FROM shark
retrieves all records in the shark
table. Once they’re retrieved, the console.log()
statement logs the rows.
If an error is triggered, execution will skip to the catch
block, and log the error. In the last line, you invoke the retrieveData()
function.
Next, save and close your editor. Run the retrieveData.js
file:
- node retrieveData.js
You will see output similar to this:
Output[
{ id: 1, name: 'sammy', color: 'blue' },
{ id: 2, name: 'jose', color: 'teal' }
]
node-postgres
returns the table rows in a JSON-like object. These objects are stored in an array.
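Because res.rows is a plain array of objects, you can process it with ordinary JavaScript. Here the result object is hard-coded to mirror the output above:

```javascript
// A hard-coded stand-in for the res object returned by pool.query().
const res = {
  rows: [
    { id: 1, name: 'sammy', color: 'blue' },
    { id: 2, name: 'jose', color: 'teal' },
  ],
};

// Destructure each row while mapping it to a readable string.
const lines = res.rows.map(({ name, color }) => `${name} is ${color}`);
console.log(lines); // [ 'sammy is blue', 'jose is teal' ]
```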
You can now retrieve data from the database. You’ll now modify data in the table using node-postgres
.
In this step, you’ll use node-postgres
to modify data in the Postgres database. This will allow you to change the data in any of the shark
table records.
You’ll create a script that takes two command-line arguments: id
and name
. You will use the id
value to select the record you want in the table. The name
argument will be the new value for the record whose name you want to change.
Create and open the modifyData.js
file:
- nano modifyData.js
In your modifyData.js
file, add the following code to modify a record in the shark
table:
const { pool } = require("./db");
async function modifyData() {
const [id, name] = process.argv.slice(2);
try {
const res = await pool.query("UPDATE shark SET name = $1 WHERE id = $2", [
name,
id,
]);
console.log(`Updated the shark name to ${name}`);
} catch (error) {
console.error(error);
}
}
modifyData();
First, you require the pool
object from the db.js
file in your modifyData.js
file.
Next, you define an asynchronous function modifyData()
to modify a record in Postgres. Inside the function, you define two variables id
and name
from the command-line arguments using the destructuring assignment.
Within the try
block, you invoke the pool.query
method from node-postgres
by passing it an SQL statement as the first argument. On the UPDATE
SQL statement, the WHERE
clause selects the record that matches the id
value. Once selected, SET name = $1
changes the value in the name field to the new value.
Next, console.log
logs a message that executes once the record name has been changed. Finally, you call the modifyData()
function on the last line.
Save and exit out of the file using CTRL+X
. Run the modifyData.js
file with 2
and san
as the arguments:
- node modifyData.js 2 san
You will receive the following output:
OutputUpdated the shark name to san
To confirm that the record name has been changed from jose
to san
, run the retrieveData.js
file:
- node retrieveData.js
You will get output similar to the following:
Output[
{ id: 1, name: 'sammy', color: 'blue' },
{ id: 2, name: 'san', color: 'teal' }
]
You should now see that the record with the id 2
now has a new name san
replacing jose
.
With that done, you’ve now successfully updated a record in the database using node-postgres
.
In this tutorial, you used node-postgres
to connect and query a Postgres database. You began by creating a user and database in Postgres. You then created a table, connected your application to Postgres using node-postgres
, and inserted, retrieved, and modified data in Postgres using the node-postgres
module.
For more information about node-postgres
, visit their documentation. To improve your Node.js skills, you can explore the How To Code in Node.js series.
In Node.js, you need to restart the process to make changes take effect. This adds an extra step to your workflow. You can eliminate this extra step by using nodemon
to restart the process automatically.
nodemon
is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn about installing, setting up, and configuring nodemon
.
Deploy your frontend applications from GitHub using DigitalOcean App Platform. Let DigitalOcean focus on scaling your app.
If you would like to follow along with this article, you will need:
This tutorial was verified with Node.js v17.1.0, npm v8.1.2, nodemon
v2.0.15, and express
v4.17.1.
Installing nodemon
First, you will need to install nodemon
on your machine. Install the utility either globally or locally on your project using npm
or yarn
:
You can install nodemon
globally with npm
:
- npm install nodemon --global
Or with yarn
:
- yarn global add nodemon
You can also install nodemon
locally. When performing a local installation, you can install nodemon
as a dev dependency with --save-dev
(or --dev
).
Install nodemon
locally with npm
:
- npm install nodemon --save-dev
Or with yarn
:
- yarn add nodemon --dev
One thing to be aware of with a local install is that you will not be able to use the nodemon
command directly:
Outputcommand not found: nodemon
You can execute the locally installed package:
- ./node_modules/nodemon/bin/nodemon.js [your node app]
You can also use it in npm scripts or with npx.
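For example, a scripts entry in package.json lets you run the locally installed copy with npm run dev (the script name dev here is an assumption):

```json
{
  "scripts": {
    "dev": "nodemon server.js"
  }
}
```

npm resolves binaries from node_modules/.bin when running scripts, so the bare nodemon command works here even though it is not on your PATH; npx nodemon server.js performs the same resolution ad hoc.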
This concludes the nodemon
installation process.
Using nodemon
You can use nodemon
to start a Node script. For example, if you have an Express server setup in a server.js
file, you can start nodemon
and watch for changes like this:
- nodemon server.js
You can pass in arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the default watched extensions (.js
, .mjs
, .json
, .coffee
, or .litcoffee
) in the current directory or a subdirectory, the process will restart.
Let’s write an example server.js
file that outputs the message: Dolphin app listening on port ${port}!
.
const express = require('express')
const app = express()
const port = 3000
app.listen(port, ()=> console.log(`Dolphin app listening on port ${port}!`))
Run the example with nodemon
:
- nodemon server.js
The terminal output will display:
Output[nodemon] 2.0.15
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon
is still running, let’s make a change to the server.js
file. Change the output to a different message: Shark app listening on port ${port}!
.
The terminal output will display:
Output[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from the Node.js app is displaying the new changes.
You can restart the process at any time by typing rs
and hitting ENTER
.
Alternatively, nodemon
will also look for a main
file specified in your project’s package.json
file:
{
// ...
"main": "server.js",
// ...
}
If a main
file is not specified, nodemon
will search for a start
script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you make the changes to package.json
, you can then call nodemon
to start the example app in watch mode without having to pass in server.js
.
You can modify the configuration settings available to nodemon
.
Let’s go over some of the main options:
- --exec
: Use the --exec
switch to specify a binary to execute the file with. For example, when combined with the ts-node
binary, --exec
can become useful to watch for changes and run TypeScript files.
- --ext
: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (e.g., --ext js,ts
).
- --delay
: By default, nodemon
waits for one second to restart the process when a file changes, but with the --delay
switch, you can specify a different delay. For example, nodemon --delay 3.2
for a 3.2-second delay.
- --watch
: Use the --watch
switch to specify multiple directories or files to watch. Add one --watch
switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so with --watch
you can narrow that to only specific subdirectories or files.
- --ignore
: Use the --ignore
switch to ignore certain files, file patterns, or directories.
- --verbose
: A more verbose output with information about what file(s) changed to trigger a restart.
You can view all the available options with the following command:
- nodemon --help
Using these options, let’s create the command to satisfy the following scenario:
- Watching the server
directory
- Watching files with a .ts
extension
- Ignoring files with a .test.ts
suffix
- Executing the file (server/server.ts
) with ts-node
- Waiting three seconds before restarting after a change
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
The terminal output will display:
Output[nodemon] 2.0.15
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): server
[nodemon] watching extensions: ts
[nodemon] starting `ts-node server/server.ts`
This command combines --watch
, --ext
, --exec
, --ignore
, and --delay
options to satisfy the conditions for our scenario.
In the previous example, adding configuration switches when running nodemon
can get tedious. A better solution for projects that require complicated configurations is to define these options in a nodemon.json
file.
For example, here are the same configurations as the previous command line example, but placed in a nodemon.json
file:
{
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap
instead of the --exec
switch. execMap
allows you to specify binaries for certain file extensions.
Alternatively, if you would rather not add a nodemon.json
config file to your project, you can add these configurations to the package.json
file under a nodemonConfig
key:
{
"name": "nodemon-example",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you make the changes to either nodemon.json
or package.json
, you can then start nodemon
with the desired script:
- nodemon server/server.ts
nodemon
will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-and-pasting or typing errors in the command line.
In this article, you explored how to use nodemon
with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view the changes.
For more information about the available features and troubleshooting errors, consult the official documentation.
If you’d like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
In this tutorial, you will use Node.js, telegraf
, jimp
, and the Pexels API to build a Telegram chatbot that will send you a randomly selected image with a fact overlaid. A Telegram bot is a bot you can interact with using custom slash commands through your preferred Telegram client. You will create the bot through Telegram, and define its logic to select a random animal image and a fact about the animal using JavaScript.
At the end of this tutorial you will have a Telegram chatbot that looks like the following:
Once you’ve completed your bot, you will receive a fact about an animal whenever you send a custom Telegram slash command.
To follow this tutorial, you will need the following tools:
This tutorial was verified with Node v12.18.2 and npm v6.14.8.
In this section, you will create the directory where you will build the chatbot, create a Node project, and install the required dependencies.
Open a terminal window and create a new directory called facts-bot
:
- mkdir facts-bot
Navigate into the directory:
- cd facts-bot
Create a directory named temp
:
- mkdir temp
With the command above, you created a directory named temp
. In this directory, you will temporarily store the images that your bot will send to the user.
Now, you’ll create a new Node.js project. Running npm’s init
command will create a package.json
file, which will manage your dependencies and metadata.
Run the initialization command:
- npm init
To accept the default values, press ENTER
to all prompts. Alternatively, you can personalize your responses. To do this, review npm’s initialization settings in Step 1 of the tutorial How To Use Node.js Modules with npm and package.json.
Open the package.json file and edit it:
- nano package.json
Now, you’ll update the properties in your package.json
file. Replace the contents inside the file with the following code:
{
"name": "facts-bot",
"version": "1.0.0",
"description": "",
"main": "main.js",
"scripts": {
"start": "nodemon main.js"
},
"author": "",
"license": "ISC"
}
Here you changed the main
and scripts
properties. By changing the main
property, you have set the application main file to main.js
. This will inform Node that the main.js
file is the primary entry point to your program. In the scripts
property you have added a script
named start
, which allows you to set the command that runs when you start the application. Once you call the script,
the command nodemon
will run the main.js
file you will create in the next step.
With your settings now defined in your package.json
file, you will now create a file that will store your environment variables. In your terminal, create a file named .env
:
- touch .env
In your .env
file, you will store your Telegram bot token and Pexels API key. A Telegram Bot token allows you to interact with your Telegram bot. The Pexels API key allows you to interact with the Pexels API. You will store your environment variables in a later step.
This time, you’ll use npm
to install the dependencies telegraf
, dotenv
, pexels
, jimp
, and uuid
. You’ll also use the --save
flag to save the dependencies. In your terminal, run the following command:
- npm install telegraf dotenv pexels jimp uuid --save
In this command, you have installed:
- telegraf: a library that helps you develop your own Telegram bots using JavaScript or TypeScript. You are going to use it to build your bot.
- dotenv: a zero-dependency module that loads environment variables from a .env file into process.env. You are going to use this module to retrieve the bot token and Pexels API key from the .env file you created.
- pexels: a convenient wrapper around the Pexels API that can be used both on the server in Node.js and in the browser. You are going to use this module to retrieve animal images from Pexels.
- jimp: an image processing library written entirely in JavaScript for Node, with zero external or native dependencies. You are going to use this library to edit images retrieved from Pexels and insert a fact about an animal into them.
- uuid: a module that allows you to generate RFC-compliant UUIDs in JavaScript. You are going to use this module to create a unique name for the image retrieved from Pexels.

Now, install nodemon as a dev dependency:
- npm install nodemon --save-dev
nodemon
is a tool that helps you develop Node.js applications by automatically restarting the Node application when it detects file changes in the directory. You will use this module to start and keep your app running as you test your bot.
Note: At the time of writing, these are the versions of the modules being used: telegraf: 4.3.0; dotenv: 8.2.0; pexels: 1.2.1; jimp: 0.16.1; uuid: 8.3.2; nodemon: 2.0.12.
In this step, you created a project directory and initialized a Node.js project for your bot. You also installed the modules needed to build the bot. In the next step, you will register a bot in Telegram and retrieve an API key for the Pexels API.
In this section, you will first register a bot with BotFather, then retrieve an API key for the Pexels API. BotFather is a chatbot managed by Telegram that allows users to create and manage chatbots.
Open your preferred Telegram client, search for @BotFather
, and start the chat. Send the /newbot
slash command and follow the instructions sent by the BotFather:
After choosing your bot name and username you will receive a message containing your bot access token:
Copy the bot token, and open your .env
file:
- nano .env
Save the Bot token in a variable named BOT_TOKEN
:
BOT_TOKEN = "Your bot token"
Now that you have saved your bot token in the .env
file, it’s time to retrieve the Pexels API key.
Navigate to Pexels, and log in to your Pexels account. Click on the Image & Video API tab and create a new API key:
Copy the API key, and open your .env
file:
- nano .env
Save the API key in a variable named PEXELS_API_KEY
. Your .env
should look like the following:
BOT_TOKEN = "Your_bot_token"
PEXELS_API_KEY = "Your_Pexels_API_key"
In this section, you have registered your bot, retrieved your Pexels API key, and saved your bot token and Pexels API key in your .env
file. In the next section, you are going to create the file responsible for running the bot.
In this section, you will create and build out your bot. You will create a file named main.js, and this file will contain your bot’s logic.
In the root directory of your project, create and open the main.js
file using your preferred text editor:
- nano main.js
Within the main.js
file, add the following code to import the libraries you’ll use:
const { Telegraf } = require('telegraf')
const { v4: uuidV4 } = require('uuid')
require('dotenv').config()
let factGenerator = require('./factGenerator')
In this code block, you have required in the telegraf
, the uuid
, the dotenv
module, and a file named factGenerator.js
. You are going to use the telegraf
module to start and manage the bot, the uuid
module to generate a unique file name for the image, and the dotenv
module to get your Telegram bot token and Pexels API key stored in the .env
file. The factGenerator.js
file will be used to retrieve a random animal image from Pexels, insert a fact about the animal, and delete the image after it’s sent to the user. You will create this file in the next section.
Below the require
statements, add the following code to create an instance of the bot:
. . .
const bot = new Telegraf(process.env.BOT_TOKEN)
bot.start((ctx) => {
let message = ` Please use the /fact command to receive a new fact`
ctx.reply(message)
})
Here, you retrieved and used the BOT_TOKEN
that BotFather sent, created a new bot instance, and assigned it to a variable called bot
. After creating a new bot instance, you added a command listener for the /start
command. This command is responsible for initiating a conversation between a user and the bot. Once a user sends a message containing /start
the bot replies with a message asking the user to use the /fact
command to receive a new fact.
You have now created the command handler responsible for starting the interaction with your chatbot. Now, let’s create the command handler for generating a fact. Below the .start()
command, add the following code:
. . .
bot.command('fact', async (ctx) => {
try {
ctx.reply('Generating image, Please wait !!!')
let imagePath = `./temp/${uuidV4()}.jpg`
await factGenerator.generateImage(imagePath)
await ctx.replyWithPhoto({ source: imagePath })
factGenerator.deleteImage(imagePath)
} catch (error) {
console.log('error', error)
ctx.reply('error sending image')
}
})
bot.launch()
In this code block, you created a command listener for the custom /fact
slash command. Once this command is triggered from the Telegram user interface, the bot sends a message to the user. The uuid
module is used to generate the image name and path. The image will be stored in the /temp
directory that you created in Step 1. Afterwards, the image path is passed to a method named generateImage()
you’ll define in the factGenerator.js
file to generate an image containing a fact about an animal. Once the image is generated, the image is sent to the user. Then, the image path is passed to a method named deleteImage()
in the factGenerator.js
file to delete the image. Lastly, you launched your bot by calling the bot.launch()
method.
The main.js
file will look like the following:
const { Telegraf } = require('telegraf')
const { v4: uuidV4 } = require('uuid')
require('dotenv').config()
let factGenerator = require('./factGenerator')
const bot = new Telegraf(process.env.BOT_TOKEN)
bot.start((ctx) => {
let message = ` Please use the /fact command to receive a new fact`
ctx.reply(message)
})
bot.command('fact', async (ctx) => {
try {
ctx.reply('Generating image, Please wait !!!')
let imagePath = `./temp/${uuidV4()}.jpg`
await factGenerator.generateImage(imagePath)
await ctx.replyWithPhoto({ source: imagePath })
factGenerator.deleteImage(imagePath)
} catch (error) {
console.log('error', error)
ctx.reply('error sending image')
}
});
bot.launch()
You have created the file responsible for running and managing your bot. You will now set facts for the animal and build out the bot’s logic in the factGenerator.js
file.
In this section, you will create files named fact.js
and factGenerator.js
. fact.js
will store facts about animals in one data source. The factGenerator.js
file will contain the code needed to retrieve a random fact about an animal from a file, retrieve an image from Pexels, use jimp
to write the fact in the retrieved image, and delete the image.
In the root directory of your project, create and open the facts.js
file using your preferred text editor:
- nano facts.js
Within the facts.js
file add the following code to create your data source:
const facts = [
{
fact: "Mother pandas keep contact with their cub nearly 100% of the time during their first month - with the cub resting on her front and remaining covered by her paw, arm or head.",
animal: "Panda"
},
{
fact: "The elephant's temporal lobe (the area of the brain associated with memory) is larger and denser than that of people - hence the saying 'elephants never forget'.",
animal: "Elephant"
},
{
fact: "On average, males weigh 190kg and females weigh 126kg . They need this weight and power behind them to hunt large prey and defend their pride. ",
animal: "Lion"
},
{
fact: "The Amazon river is home to four species of river dolphin that are found nowhere else on Earth. ",
animal: "Dolphin"
},
]
module.exports = { facts }
In this code block, you defined an object with an array containing facts about animals and stored in a variable named facts
. Each object has the following properties: fact
and animal
. In the property named fact
, its value is a fact about an animal, while the property animal
stores the name of the animal. Lastly, you are exporting the facts
array.
Now, create a file named factGenerator.js
:
- nano factGenerator.js
Inside the factGenerator.js
file, add the following code to require in the dependencies you’ll use to build out the logic to make your animal image:
let { createClient } = require('pexels')
let Jimp = require('jimp')
const fs = require('fs')
let { facts } = require('./facts')
Here, you required in the pexels
, the jimp
, the fs
module, and your facts.js
file. You will use the pexels
module to retrieve animal images from Pexels, the jimp
module to edit the image retrieved from Pexels, and the fs
module to delete the image from your file directory after it’s sent to the user.
Below the require
statements, add the following code to generate an image:
. . .
async function generateImage(imagePath) {
let fact = randomFact()
let photo = await getRandomImage(fact.animal)
await editImage(photo, imagePath, fact.fact)
}
In this code block, you created a function named generateImage()
. This function takes as an argument the path of the Pexels image in your file directory. Once this function is called, a function named randomFact()
is called and the value returned is stored in a variable named fact
. The randomFact()
function randomly selects an object in the facts.js
file. After receiving the object, its property animal
is passed to a function named getRandomImage()
. The getRandomImage()
function will use the pexels
module to search for images containing the name of the animal passed, and select a random image. The value returned is stored in a variable named photo
. Finally, the photo
, imagePath
, and the fact
property from the facts.js
file are passed to a function named editImage()
. The editImage()
function uses the jimp
module to insert the random fact in the random image and then save the edited image in the imagePath
.
Here, you have created the function that is called when you send the /fact
slash command to the bot. Now you’ll create the functions getRandomImage()
and editImage()
and construct the logic behind selecting and editing a random image.
Below the generateImage()
function, add the following code to set the randomization logic:
. . .
function randomFact() {
let fact = facts[randomInteger(0, (facts.length - 1))]
return fact
}
function randomInteger(min, max) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
You have now created the functions randomFact()
and randomInteger()
. The randomFact()
function selects a random fact
in the facts.js
file by calling the randomInteger()
function, and returns this object. The randomInteger()
function returns a random integer between 0 and one less than the number of facts in the facts.js
file.
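To see why this formula always lands in the inclusive range, the helper can be exercised on its own. This is a standalone sketch, separate from the bot’s files:

```javascript
// Returns a random integer in the inclusive range [min, max].
function randomInteger(min, max) {
  // Math.random() is in [0, 1), so the scaled value is in [0, max - min + 1);
  // flooring gives an integer in [0, max - min], which min shifts into range.
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Draw many samples and confirm every value stays within [0, 3].
const samples = Array.from({ length: 1000 }, () => randomInteger(0, 3));
console.log(samples.every((n) => Number.isInteger(n) && n >= 0 && n <= 3)); // true
```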
Now that you’ve defined functions to return a random fact and random integer, you’ll need to create a function to get a random image from Pexels. Below the randomInteger()
function, add the following code to get a random image:
. . .
async function getRandomImage(animal) {
try {
const client = createClient(process.env.PEXELS_API_KEY)
const query = animal
let image
await client.photos.search({ query, per_page: 10 }).then(res => {
let images = res.photos
image = images[randomInteger(0, (images.length - 1))]
})
return image
} catch (error) {
console.log('error downloading image', error)
getRandomImage(animal)
}
}
In this code block, you have created a function named getRandomImage()
. This function takes as an argument an animal name. When this function is called, a client
object is created using the createClient()
method from the pexels
module and the Pexels API key stored in the .env
file. The animal name is stored in a variable called query
, then the client
object is used to search for images containing the value in the query
. Once the images are found, a random image is selected with the help of the randomInteger()
function. Finally, the random image is returned to the generateImage()
method in your main.js
file.
With your getRandomImage()
function in place, the image selected needs to have a text overlay before it’s sent to your Telegram bot. Below the getRandomImage()
function, add the following code to set the overlay:
. . .
async function editImage(image, imagePath, fact) {
try {
let imgURL = image.src.medium
let animalImage = await Jimp.read(imgURL).catch(error => console.log('error ', error))
let animalImageWidth = animalImage.bitmap.width
let animalImageHeight = animalImage.bitmap.height
let imgDarkener = await new Jimp(animalImageWidth, animalImageHeight, '#000000')
imgDarkener = await imgDarkener.opacity(0.5)
animalImage = await animalImage.composite(imgDarkener, 0, 0);
} catch (error) {
console.log("error editing image", error)
}
}
Here, you created a function named editImage()
. This function takes as arguments a random animal image (labeled image), the imagePath, and a fact
about this random animal. In the variable imgURL
, the URL for the medium size of the image is retrieved from the Pexels API. Afterwards the read()
method of jimp
is used to load the image. Once the image is loaded and stored in a variable named animalImage
, the image width and height are retrieved and stored in the variables animalImageWidth
and animalImageHeight
respectively. The variable imgDarkener
stores a new instance of Jimp(): a black image with the same dimensions as the animal image, which is used to darken it. The opacity()
method of jimp
is used to set imgDarkener
’s opacity to 50%. Finally, the composite()
method of jimp
is used to put the contents in imgDarkener
over the image in animalImage
. This, in turn, darkens the image in animalImage before the text stored in the fact variable is added, making the text visible over the image.
Note: Jimp by default provides a method named color()
that allows you to adjust an image’s tonal levels. For the purpose of this tutorial, you’ll write a custom tonal adjuster as the color()
method does not offer the precision necessary here.
At the bottom of the try
block inside the editImage()
function, add the following code:
. . .
async function editImage(image, imagePath,fact) {
try {
. . .
let posX = animalImageWidth / 15
let posY = animalImageHeight / 15
let maxWidth = animalImageWidth - (posX * 2)
let maxHeight = animalImageHeight - posY
let font = await Jimp.loadFont(Jimp.FONT_SANS_16_WHITE)
await animalImage.print(font, posX, posY, {
text: fact,
alignmentX: Jimp.HORIZONTAL_ALIGN_CENTER,
alignmentY: Jimp.VERTICAL_ALIGN_MIDDLE
}, maxWidth, maxHeight)
await animalImage.writeAsync(imagePath)
console.log("Image generated successfully")
} catch (error) {
. . .
}
}
In this code block, you used the animalImageWidth
, and animalImageHeight
to get the values that will be used to center the text in the animalImage
. After, you used the loadFont()
method of jimp
to load the font, and store the font in a variable named font
. The font color is white, the type is sans-serif (SANS
), and the size is 16. Finally, you used the print()
method of jimp
to insert the fact
in the animalImage
, and the writeAsync()
method to save the animalImage
in the imagePath
.
Now that you’ve created the function responsible for editing the image, you’ll need a function to delete the image from your file structure after it is sent to the user. Below your editImage()
function, add the following code:
. . .
const deleteImage = (imagePath) => {
fs.unlink(imagePath, (err) => {
if (err) {
return
}
console.log('file deleted')
})
}
module.exports = { generateImage, deleteImage }
Here, you have created a function named deleteImage()
. This function takes as an argument the variable imagePath
. Once this function is called, the fs
module is used to delete the image stored in the variable imagePath
. Lastly, you exported the generateImage()
function and the deleteImage()
function.
With your functions in place, the factGenerator.js
file will look like the following:
let { createClient } = require('pexels')
let Jimp = require('jimp')
const fs = require('fs')
let { facts } = require('./facts')
async function generateImage(imagePath) {
let fact = randomFact()
let photo = await getRandomImage(fact.animal)
await editImage(photo, imagePath, fact.fact)
}
function randomFact() {
let fact = facts[randomInteger(0, (facts.length - 1))]
return fact
}
function randomInteger(min, max) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
async function getRandomImage(animal) {
try {
const client = createClient(process.env.PEXELS_API_KEY)
const query = animal
let image
await client.photos.search({ query, per_page: 10 }).then(res => {
let images = res.photos
image = images[randomInteger(0, (images.length - 1))]
})
return image
} catch (error) {
console.log('error downloading image', error)
getRandomImage(animal)
}
}
async function editImage(image, imagePath, fact) {
try {
let imgURL = image.src.medium
let animalImage = await Jimp.read(imgURL).catch(error => console.log('error ', error))
let animalImageWidth = animalImage.bitmap.width
let animalImageHeight = animalImage.bitmap.height
let imgDarkener = await new Jimp(animalImageWidth, animalImageHeight, '#000000')
imgDarkener = await imgDarkener.opacity(0.5)
animalImage = await animalImage.composite(imgDarkener, 0, 0);
let posX = animalImageWidth / 15
let posY = animalImageHeight / 15
let maxWidth = animalImageWidth - (posX * 2)
let maxHeight = animalImageHeight - posY
let font = await Jimp.loadFont(Jimp.FONT_SANS_16_WHITE)
await animalImage.print(font, posX, posY, {
text: fact,
alignmentX: Jimp.HORIZONTAL_ALIGN_CENTER,
alignmentY: Jimp.VERTICAL_ALIGN_MIDDLE
}, maxWidth, maxHeight)
await animalImage.writeAsync(imagePath)
console.log("Image generated successfully")
} catch (error) {
console.log("error editing image", error)
}
}
const deleteImage = (imagePath) => {
fs.unlink(imagePath, (err) => {
if (err) {
return
}
console.log('file deleted')
})
}
module.exports = { generateImage, deleteImage }
Save your factGenerator.js
file. Return to your terminal, and run the following command to start your bot:
- npm start
Open your preferred Telegram client, and search for your bot. Send a message with the /start
command to initiate the conversation, or click the Start button. Then, send a message with the /fact
command to receive your image.
You will receive an image similar to the following:
You now see the image in your preferred Telegram client with a fact imposed over the image. You’ve created the file and functions responsible for retrieving a random fact from the facts.js
file, retrieving an animal image from Pexels, and inserting a fact onto the image.
In this tutorial, you built a Telegram chatbot that sends an image of an animal with a fact overlaid through a custom slash command. You created the command handlers for the bot through the telegraf
module. You also created functions responsible for retrieving a random fact, random images from Pexels using the pexels
module, and inserting a fact over the random image using the jimp
module. For more information about the Pexels API and the telegraf and jimp modules, please refer to the documentation for the Pexels API, telegraf, and jimp.
Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, MacOS, FreeBSD, and Windows. Node.js applications can be run at the command line, but we’ll focus on running them as a service, so that they will automatically restart on reboot or failure, and can safely be used in a production environment.
In this tutorial, we will cover setting up a production-ready Node.js environment on a single Ubuntu 16.04 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS, using a free certificate provided by Let’s Encrypt.
This guide assumes that you have the following:
- sudo privileges, as described in the initial server setup guide for Ubuntu 16.04.

When you’ve completed the prerequisites, you will have a server serving the default Nginx placeholder page at https://example.com/.
Let’s get started by installing the Node.js runtime on your server.
We will install the latest LTS release of Node.js, using the NodeSource package archives.
First, you need to install the NodeSource PPA in order to get access to its contents. Make sure you’re in your home directory, and use curl
to retrieve the installation script for the Node.js 16.x archives:
- cd ~
- curl -sL https://deb.nodesource.com/setup_16.x -o nodesource_setup.sh
You can inspect the contents of this script with nano
(or your preferred text editor):
- nano nodesource_setup.sh
Then run the script under sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from NodeSource, you can install the Node.js package with apt-get:
- sudo apt-get install nodejs
The nodejs
package contains the node
binary as well as npm
, so you don’t need to install npm
separately. However, in order for some npm
packages to work (such as those that require compiling code from source), you will need to install the build-essential
package:
- sudo apt-get install build-essential
The Node.js runtime is now installed, and ready to run an application. Let’s write a Node.js application.
We will write a Hello World application that returns “Hello World” to any HTTP requests. This is a sample application that will help you get your Node.js set up, which you can replace with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.
First, create and open your Node.js application for editing. For this tutorial, we will use nano
to edit a sample application called hello.js
:
- cd ~
- nano hello.js
Insert the following code into the file. If you want to, you may replace the highlighted port, 8080
, in both locations (be sure to use a non-admin port, i.e. 1024 or greater):
#!/usr/bin/env nodejs
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'localhost');
console.log('Server running at http://localhost:8080/');
Now save and exit.
This Node.js application listens on the specified address (localhost
) and port (8080
), and returns “Hello World” with a 200
HTTP success code. Since we’re listening on localhost, remote clients won’t be able to connect to our application.
In order to test your application, set hello.js
to be executable using chmod
:
- chmod +x ./hello.js
Then run it like so:
- ./hello.js
OutputServer running at http://localhost:8080/
Note: Running a Node.js application in this manner will block additional commands until the application is killed by pressing Ctrl-C.
In order to test the application, open another terminal session on your server, and connect to localhost with curl
:
- curl http://localhost:8080
If you see the following output, the application is working properly and listening on the proper address and port:
OutputHello World
If you do not see the proper output, make sure that your Node.js application is running, and configured to listen on the proper address and port.
Once you’re sure it’s working, switch back to your other terminal and kill the Node.js application (if you haven’t already) by pressing Ctrl+C.
Now we will install PM2, which is a process manager for Node.js applications. PM2 provides an easy way to manage and daemonize applications (run them in the background as a service).
We will use npm
, a package manager for Node modules that installs with Node.js, to install PM2 on our server. Use this command to install PM2:
- sudo npm install -g pm2
The -g
option tells npm
to install the module globally, so that it’s available system-wide.
We will cover a few basic uses of PM2.
The first thing you will want to do is use the pm2 start
command to run your application, hello.js
, in the background:
- pm2 start hello.js
This also adds your application to PM2’s process list, which is outputted every time you start an application:
Output[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
[PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ hello │ default │ N/A │ fork │ 13734 │ 0s │ 0 │ online │ 0% │ 25.0mb │ sammy │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
As you can see, PM2 automatically assigns a name (based on the filename, without the .js
extension) and a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but an additional step needs to be taken to get the application to launch on system startup (boot or reboot). Luckily, PM2 provides an easy way to do this, the startup
subcommand.
The startup
subcommand generates and configures a startup script to launch PM2 and its managed processes on server boots:
- pm2 startup systemd
The last line of the resulting output will include a command that you must run with superuser privileges:
Output[PM2] Init System found: systemd
[PM2] You have to run this command as root. Execute the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Run the command that was generated (similar to the highlighted output above, but with your username instead of sammy
) to set PM2 up to start on boot (use the command from your own output):
- sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
This will create a systemd unit which runs pm2
for your user on boot. This pm2
instance, in turn, runs hello.js
. You can check the status of the systemd unit with systemctl
:
- systemctl status pm2-sammy
For a detailed overview of systemd, see Systemd Essentials: Working with Services, Units, and the Journal.
PM2 provides many subcommands that allow you to manage or look up information about your applications. Note that running pm2
without any arguments will display a help page, including example usage, that covers PM2 usage in more detail than this section of the tutorial.
Stop an application with this command (specify the PM2 App name
or id
):
- pm2 stop app_name_or_id
Restart an application with this command (specify the PM2 App name
or id
):
- pm2 restart app_name_or_id
The list of applications currently managed by PM2 can also be looked up with the list
subcommand:
- pm2 list
More information about a specific application can be found by using the info
subcommand (specify the PM2 App name or id):
- pm2 info example
The PM2 process monitor can be pulled up with the monit
subcommand. This displays the application status, CPU, and memory usage:
- pm2 monit
Now that your Node.js application is running, and managed by PM2, let’s set up the reverse proxy.
Now that your application is running, and listening on localhost, you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.
In the prerequisite tutorial, we set up our Nginx configuration in the /etc/nginx/sites-available/default
file. Open the file for editing:
- sudo nano /etc/nginx/sites-available/default
Within the server
block you should have an existing location /
block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the highlighted portion to the correct port number.
. . .
location / {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
This configures the server to respond to requests at its root. Assuming our server is available at example.com
, accessing https://example.com/
via a web browser would send the request to hello.js
, listening on port 8080
at localhost.
You can add additional location
blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 8081
, you could add this location block to allow access to it via http://example.com/app2
:
location /app2 {
proxy_pass http://localhost:8081;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
Once you are done adding the location blocks for your applications, save and exit.
Make sure you didn’t introduce any syntax errors by typing:
- sudo nginx -t
Next, restart Nginx:
- sudo systemctl restart nginx
Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your server’s URL (its public IP address or domain name).
Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on an Ubuntu 16.04 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share. Good luck with your Node.js development.
Handling media assets is becoming a common requirement of modern back-end services. Using dedicated, cloud-based solutions may help when you’re dealing with massive scale or performing expensive operations, such as video transcoding. However, the extra cost and added complexity may be hard to justify when all you need is to extract a thumbnail from a video or check that user-generated content is in the correct format. Particularly at a smaller scale, it makes sense to add media processing capability directly to your Node.js API.
In this guide, you will build a media API in Node.js with Express and ffmpeg.wasm
— a WebAssembly port of the popular media processing tool. You’ll build an endpoint that extracts a thumbnail from a video as an example. You can use the same techniques to add other features supported by FFmpeg to your API.
When you’re finished, you will have a good grasp on handling binary data in Express and processing it with ffmpeg.wasm
. You’ll also handle requests made to your API that cannot be processed in parallel.
To complete this tutorial, you will need:
This tutorial was verified with Node v16.11.0, npm v7.15.1, express v4.17.1, and ffmpeg.wasm v0.10.1.
In this step, you will create a project directory, initialize Node.js and install ffmpeg
, and set up a basic Express server.
Start by opening the terminal and creating a new directory for the project:
- mkdir ffmpeg-api
Navigate to the new directory:
- cd ffmpeg-api
Use npm init
to create a new package.json
file. The -y
parameter indicates that you’re happy with the default settings for the project.
- npm init -y
Finally, use npm install
to install the packages required to build the API. The --save
flag indicates that you wish to save those as dependencies in the package.json
file.
- npm install --save @ffmpeg/ffmpeg @ffmpeg/core express cors multer p-queue
Now that you have installed ffmpeg
, you’ll set up a web server that responds to requests using Express.
First, open a new file called server.mjs
with nano
or your editor of choice:
- nano server.mjs
The code in this file will register the cors
middleware which will permit requests made from websites with a different origin. At the top of the file, import the express
and cors
dependencies:
import express from 'express';
import cors from 'cors';
Then, create an Express app and start the server on port 3000
by adding the following code below the import
statements:
...
const app = express();
const port = 3000;
app.use(cors());
app.listen(port, () => {
console.log(`[info] ffmpeg-api listening at http://localhost:${port}`)
});
You can start the server by running the following command:
- node server.mjs
You’ll see the following output:
Output[info] ffmpeg-api listening at http://localhost:3000
When you try loading http://localhost:3000
in your browser, you’ll see Cannot GET /
. This is Express telling you that the server is running and listening, but no route handler has been defined for GET / yet.
With your Express server now set up, you’ll create a client to upload the video and make requests to your Express server.
In this section, you’ll create a web page that will let you select a file and upload it to the API for processing.
Start by opening a new file called client.html
:
- nano client.html
In your client.html
file, create a file input and a Create Thumbnail button. Below, add an empty <div>
element to display errors and an image that will show the thumbnail that the API sends back. At the very end of the <body>
tag, load a script called client.js
. Your final HTML template should look as follows:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Create a Thumbnail from a Video</title>
<style>
#thumbnail {
max-width: 100%;
}
</style>
</head>
<body>
<div>
<input id="file-input" type="file" />
<button id="submit">Create Thumbnail</button>
<div id="error"></div>
<img id="thumbnail" />
</div>
<script src="client.js"></script>
</body>
</html>
Note that each element has a unique id. You’ll need them when referring to the elements from the client.js
script. The styling on the #thumbnail
element is there to ensure that the image fits on the screen when it loads.
Save the client.html
file and open client.js
:
- nano client.js
In your client.js
file, start by defining variables that store references to the HTML elements you created:
const fileInput = document.querySelector('#file-input');
const submitButton = document.querySelector('#submit');
const thumbnailPreview = document.querySelector('#thumbnail');
const errorDiv = document.querySelector('#error');
Then, attach a click event listener to the submitButton
variable to check whether you’ve selected a file:
...
submitButton.addEventListener('click', async () => {
const { files } = fileInput;
});
Next, create a function showError()
that will output an error message when a file is not selected. Add the showError()
function above your event listener:
const fileInput = document.querySelector('#file-input');
const submitButton = document.querySelector('#submit');
const thumbnailPreview = document.querySelector('#thumbnail');
const errorDiv = document.querySelector('#error');
function showError(msg) {
errorDiv.innerText = `ERROR: ${msg}`;
}
submitButton.addEventListener('click', async () => {
...
Now, you will build a function createThumbnail()
that will make a request to the API, send the video, and receive a thumbnail in response. At the top of your client.js
file, define a new constant with the URL to a /thumbnail
endpoint:
const API_ENDPOINT = 'http://localhost:3000/thumbnail';
const fileInput = document.querySelector('#file-input');
const submitButton = document.querySelector('#submit');
const thumbnailPreview = document.querySelector('#thumbnail');
const errorDiv = document.querySelector('#error');
...
You will define and use the /thumbnail
endpoint in your Express server.
Next, add the createThumbnail()
function below your showError()
function:
...
function showError(msg) {
errorDiv.innerText = `ERROR: ${msg}`;
}
async function createThumbnail(video) {
}
...
Web APIs frequently use JSON to transfer structured data from and to the client. To include a video in a JSON payload, you would have to encode it in base64, which would increase its size by about 33%. You can avoid this by using multipart requests instead. Multipart requests allow you to transfer structured data, including binary files, over HTTP without the unnecessary overhead. You can do this using the FormData()
constructor function.
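The base64 overhead mentioned above is easy to verify in standalone Node.js. A quick sketch (the 3 MB buffer is illustrative, not part of the tutorial's code):

```javascript
// Base64 encodes every 3 bytes of binary data as 4 ASCII characters,
// so the encoded form is 4/3 the size of the original (~33% larger).
const binary = Buffer.alloc(3_000_000); // 3 MB of binary data
const encoded = binary.toString('base64');

console.log(binary.length);  // 3000000
console.log(encoded.length); // 4000000
```

Multipart requests avoid this overhead entirely because the bytes are sent as-is in their own body part.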
Inside the createThumbnail()
function, create an instance of FormData
and append the video file to the object. Then make a POST
request to the API endpoint using the Fetch API with the FormData()
instance as the body. Interpret the response as a binary file (or blob) and convert it to a data URL so that you can assign it to the <img>
tag you created earlier.
Here’s the full implementation of createThumbnail()
:
...
async function createThumbnail(video) {
const payload = new FormData();
payload.append('video', video);
const res = await fetch(API_ENDPOINT, {
method: 'POST',
body: payload
});
if (!res.ok) {
throw new Error('Creating thumbnail failed');
}
const thumbnailBlob = await res.blob();
const thumbnail = await blobToDataURL(thumbnailBlob);
return thumbnail;
}
...
You’ll notice createThumbnail()
has the function blobToDataURL()
in its body. This is a helper function that will convert a blob to a data URL.
Above your createThumbnail()
function, create the function blobToDataURL()
that returns a promise:
...
async function blobToDataURL(blob) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result);
reader.onerror = () => reject(reader.error);
reader.onabort = () => reject(new Error("Read aborted"));
reader.readAsDataURL(blob);
});
}
...
blobToDataURL()
uses FileReader
to read the contents of the binary file and format it as a data URL.
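FileReader is a browser API, but the data URL format it produces is easy to illustrate in plain Node.js. A hedged sketch of the same result (a MIME type followed by the base64-encoded bytes; the four-byte buffer is illustrative):

```javascript
// A data URL embeds a file's bytes directly in a string:
//   data:<mime type>;base64,<base64-encoded bytes>
const bytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // the PNG magic number

const dataURL = `data:image/png;base64,${bytes.toString('base64')}`;
console.log(dataURL); // data:image/png;base64,iVBORw==
```

Assigning such a string to an `<img>` element's `src` attribute displays the image without any further network request.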
With the createThumbnail()
and showError()
functions now defined, you can use them to finish implementing the event listener:
...
submitButton.addEventListener('click', async () => {
const { files } = fileInput;
if (files.length > 0) {
const file = files[0];
try {
const thumbnail = await createThumbnail(file);
thumbnailPreview.src = thumbnail;
} catch(error) {
showError(error);
}
} else {
showError('Please select a file');
}
});
When a user clicks on the button, the event listener will pass the file to the createThumbnail()
function. If successful, it will assign the thumbnail to the <img>
element you created earlier. In case the user doesn’t select a file or the request fails, it will call the showError()
function to display an error.
At this point, your client.js
file will look like the following:
const API_ENDPOINT = 'http://localhost:3000/thumbnail';
const fileInput = document.querySelector('#file-input');
const submitButton = document.querySelector('#submit');
const thumbnailPreview = document.querySelector('#thumbnail');
const errorDiv = document.querySelector('#error');
function showError(msg) {
errorDiv.innerText = `ERROR: ${msg}`;
}
async function blobToDataURL(blob) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result);
reader.onerror = () => reject(reader.error);
reader.onabort = () => reject(new Error("Read aborted"));
reader.readAsDataURL(blob);
});
}
async function createThumbnail(video) {
const payload = new FormData();
payload.append('video', video);
const res = await fetch(API_ENDPOINT, {
method: 'POST',
body: payload
});
if (!res.ok) {
throw new Error('Creating thumbnail failed');
}
const thumbnailBlob = await res.blob();
const thumbnail = await blobToDataURL(thumbnailBlob);
return thumbnail;
}
submitButton.addEventListener('click', async () => {
const { files } = fileInput;
if (files.length > 0) {
const file = files[0];
try {
const thumbnail = await createThumbnail(file);
thumbnailPreview.src = thumbnail;
} catch(error) {
showError(error);
}
} else {
showError('Please select a file');
}
});
Start the server again by running:
- node server.mjs
With your client now set up, uploading the video file here will result in receiving an error message. This is because the /thumbnail
endpoint is not built yet. In the next step, you’ll create the /thumbnail
endpoint in Express to accept the video file and create the thumbnail.
In this step, you will set up a POST
request for the /thumbnail
endpoint and use middleware to accept multipart requests.
Open server.mjs
in an editor:
- nano server.mjs
Then, import multer
at the top of the file:
import express from 'express';
import cors from 'cors';
import multer from 'multer';
...
Multer is a middleware that processes incoming multipart/form-data
requests before passing them to your endpoint handler. It extracts fields and files from the body and attaches them to the request object in Express. You can configure where to store the uploaded files and set limits on file size and format.
After importing it, initialize the multer
middleware with the following options:
...
const app = express();
const port = 3000;
const upload = multer({
storage: multer.memoryStorage(),
limits: { fileSize: 100 * 1024 * 1024 }
});
app.use(cors());
...
The storage
option lets you choose where to store the incoming files. Calling multer.memoryStorage()
will initialize a storage engine that keeps files in Buffer objects in memory as opposed to writing them to disk. The limits
option lets you define various limits on what files will be accepted. Set the fileSize
limit to 100MB or a different number that matches your needs and the amount of memory available on your server. This will prevent your API from crashing when the input file is too big.
Note: Due to the limitations of WebAssembly, ffmpeg.wasm
cannot handle input files over 2GB in size.
Next, set up the POST /thumbnail
endpoint itself:
...
app.use(cors());
app.post('/thumbnail', upload.single('video'), async (req, res) => {
const videoData = req.file.buffer;
res.sendStatus(200);
});
app.listen(port, () => {
console.log(`[info] ffmpeg-api listening at http://localhost:${port}`)
});
The upload.single('video')
call will set up a middleware for that endpoint only that will parse the body of a multipart request that includes a single file. The first parameter is the field name. It must match the one you gave to FormData
when creating the request in client.js
. In this case, it’s video
. multer
will then attach the parsed file to the req
parameter. The content of the file will be under req.file.buffer
.
At this point, the endpoint doesn’t do anything with the data it receives. It acknowledges the request by sending an empty 200
response. In the next step, you’ll replace that with the code that extracts a thumbnail from the video data received.
ffmpeg.wasm
In this step, you’ll use ffmpeg.wasm
to extract a thumbnail from the video file received by the POST /thumbnail
endpoint.
ffmpeg.wasm
is a pure WebAssembly and JavaScript port of FFmpeg. Its main goal is to allow running FFmpeg directly in the browser. However, because Node.js is built on top of V8 — Chrome’s JavaScript engine — you can use the library on the server too.
The benefit of using a native port of FFmpeg over a wrapper built on top of the ffmpeg
command is that if you’re planning to deploy your app with Docker, you don’t have to build a custom image that includes both FFmpeg and Node.js. This will save you time and reduce the maintenance burden of your service.
Add the following import to the top of server.mjs
:
import express from 'express';
import cors from 'cors';
import multer from 'multer';
import { createFFmpeg } from '@ffmpeg/ffmpeg';
...
Then, create an instance of ffmpeg.wasm
and start loading the core:
...
import { createFFmpeg } from '@ffmpeg/ffmpeg';
const ffmpegInstance = createFFmpeg({ log: true });
let ffmpegLoadingPromise = ffmpegInstance.load();
const app = express();
...
The ffmpegInstance
variable holds a reference to the library. Calling ffmpegInstance.load()
starts loading the core into memory asynchronously and returns a promise. Store the promise in the ffmpegLoadingPromise
variable so that you can check whether the core has loaded.
Next, define the following helper function that will use ffmpegLoadingPromise
to wait for the core to load in case the first request arrives before it’s ready:
...
let ffmpegLoadingPromise = ffmpegInstance.load();
async function getFFmpeg() {
if (ffmpegLoadingPromise) {
await ffmpegLoadingPromise;
ffmpegLoadingPromise = undefined;
}
return ffmpegInstance;
}
const app = express();
...
The getFFmpeg()
function returns a reference to the library stored in the ffmpegInstance
variable. Before returning it, it checks whether the library has finished loading. If not, it will wait until ffmpegLoadingPromise
resolves. In case the first request to your POST /thumbnail
endpoint arrives before ffmpegInstance
is ready to use, your API will wait and resolve it when it can rather than rejecting it.
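This load-once pattern can be sketched independently of ffmpeg.wasm. A minimal standalone illustration, where the hypothetical slowInit() stands in for ffmpegInstance.load():

```javascript
// Stand-in for an expensive one-time initialization, such as loading a
// WebAssembly core into memory.
function slowInit() {
  return new Promise((resolve) => setTimeout(() => resolve('core'), 10));
}

let resource = null;
let loadingPromise = slowInit(); // loading starts immediately at startup

async function getResource() {
  // Every caller awaits the same in-flight promise; once it has resolved,
  // subsequent calls return the instance without waiting.
  if (loadingPromise) {
    resource = await loadingPromise;
    loadingPromise = undefined;
  }
  return resource;
}

// Two requests arriving before loading finishes both get the loaded resource.
(async () => {
  const [a, b] = await Promise.all([getResource(), getResource()]);
  console.log(a, b); // core core
})();
```

The key property is that callers are never rejected just because they arrived early; they simply wait for the single initialization to complete.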
Now, implement the POST /thumbnail
endpoint handler. Replace res.sendStatus(200);
at the end of the function with a call to getFFmpeg
to get a reference to ffmpeg.wasm
when it’s ready:
...
app.post('/thumbnail', upload.single('video'), async (req, res) => {
const videoData = req.file.buffer;
const ffmpeg = await getFFmpeg();
});
...
ffmpeg.wasm
works on top of an in-memory file system. You can read and write to it using ffmpeg.FS
. When running FFmpeg operations, you will pass virtual file names to the ffmpeg.run
function as an argument the same way as you would when working with the CLI tool. Any output files created by FFmpeg will be written to the file system for you to retrieve.
In this case, the input file is a video. The output file will be a single PNG image. Define the following variables:
...
const ffmpeg = await getFFmpeg();
const inputFileName = `input-video`;
const outputFileName = `output-image.png`;
let outputData = null;
});
...
The file names will be used on the virtual file system. outputData
is where you’ll store the thumbnail when it’s ready.
Call ffmpeg.FS()
to write the video data to the in-memory file system:
...
let outputData = null;
ffmpeg.FS('writeFile', inputFileName, videoData);
});
...
Then, run the FFmpeg operation:
...
ffmpeg.FS('writeFile', inputFileName, videoData);
await ffmpeg.run(
'-ss', '00:00:01.000',
'-i', inputFileName,
'-frames:v', '1',
outputFileName
);
});
...
The -i
parameter specifies the input file. -ss
seeks to the specified time (in this case, 1 second from the beginning of the video). -frames:v
limits the number of frames that will be written to the output (a single frame in this scenario). outputFileName
at the end indicates where FFmpeg will write the output.
After FFmpeg exits, use ffmpeg.FS()
to read the data from the file system and delete both the input and output files to free up memory:
...
await ffmpeg.run(
'-ss', '00:00:01.000',
'-i', inputFileName,
'-frames:v', '1',
outputFileName
);
outputData = ffmpeg.FS('readFile', outputFileName);
ffmpeg.FS('unlink', inputFileName);
ffmpeg.FS('unlink', outputFileName);
});
...
Finally, dispatch the output data in the body of the response:
...
ffmpeg.FS('unlink', outputFileName);
res.writeHead(200, {
'Content-Type': 'image/png',
'Content-Disposition': `attachment;filename=${outputFileName}`,
'Content-Length': outputData.length
});
res.end(Buffer.from(outputData, 'binary'));
});
...
Calling res.writeHead()
dispatches the response head. The second parameter includes custom HTTP headers with information about the data in the body of the response that will follow. The res.end()
function sends the data from its first argument as the body of the response and finalizes the response. The outputData
variable is a raw array of bytes as returned by ffmpeg.FS()
. Passing it to Buffer.from()
initializes a Buffer to ensure the binary data will be handled correctly by res.end()
.
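The byte handling here can be seen in isolation. A standalone sketch, with an illustrative four-byte array standing in for FFmpeg's output:

```javascript
// Data read back with ffmpeg.FS('readFile', ...) is a Uint8Array of raw
// bytes. Buffer.from() copies those bytes into a Buffer, which res.end()
// can send without mangling them through string encoding.
const outputData = Uint8Array.from([137, 80, 78, 71]); // PNG magic bytes

const body = Buffer.from(outputData);
console.log(body.length); // 4
console.log(body[0]);     // 137
```

Passing a raw Uint8Array straight to res.end() would usually also work, but wrapping it in a Buffer makes the intent explicit and matches what Express expects for binary bodies.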
At this point, your POST /thumbnail
endpoint implementation should look like this:
...
app.post('/thumbnail', upload.single('video'), async (req, res) => {
const videoData = req.file.buffer;
const ffmpeg = await getFFmpeg();
const inputFileName = `input-video`;
const outputFileName = `output-image.png`;
let outputData = null;
ffmpeg.FS('writeFile', inputFileName, videoData);
await ffmpeg.run(
'-ss', '00:00:01.000',
'-i', inputFileName,
'-frames:v', '1',
outputFileName
);
outputData = ffmpeg.FS('readFile', outputFileName);
ffmpeg.FS('unlink', inputFileName);
ffmpeg.FS('unlink', outputFileName);
res.writeHead(200, {
'Content-Type': 'image/png',
'Content-Disposition': `attachment;filename=${outputFileName}`,
'Content-Length': outputData.length
});
res.end(Buffer.from(outputData, 'binary'));
});
...
Aside from the 100MB file limit for uploads, there’s no input validation or error handling. When ffmpeg.wasm
fails to process a file, reading the output from the virtual file system will fail and prevent the response from being sent. For the purposes of this tutorial, wrap the implementation of the endpoint in a try-catch
block to handle that scenario:
...
app.post('/thumbnail', upload.single('video'), async (req, res) => {
try {
const videoData = req.file.buffer;
const ffmpeg = await getFFmpeg();
const inputFileName = `input-video`;
const outputFileName = `output-image.png`;
let outputData = null;
ffmpeg.FS('writeFile', inputFileName, videoData);
await ffmpeg.run(
'-ss', '00:00:01.000',
'-i', inputFileName,
'-frames:v', '1',
outputFileName
);
outputData = ffmpeg.FS('readFile', outputFileName);
ffmpeg.FS('unlink', inputFileName);
ffmpeg.FS('unlink', outputFileName);
res.writeHead(200, {
'Content-Type': 'image/png',
'Content-Disposition': `attachment;filename=${outputFileName}`,
'Content-Length': outputData.length
});
res.end(Buffer.from(outputData, 'binary'));
} catch(error) {
console.error(error);
res.sendStatus(500);
}
...
});
Secondly, ffmpeg.wasm
cannot handle two requests in parallel. You can try this yourself by launching the server:
- node --experimental-wasm-threads server.mjs
Note the flag required for ffmpeg.wasm
to work. The library depends on WebAssembly threads and bulk memory operations. These have been in V8/Chrome since 2019. However, as of Node.js v16.11.0, WebAssembly threads remain behind a flag in case there might be changes before the proposal is finalised. Bulk memory operations also require a flag in older versions of Node. If you’re running Node.js 15 or lower, add --experimental-wasm-bulk-memory
as well.
The output of the command will look like this:
Output[info] use ffmpeg.wasm v0.10.1
[info] load ffmpeg-core
[info] loading ffmpeg-core
[info] fetch ffmpeg.wasm-core script from @ffmpeg/core
[info] ffmpeg-api listening at http://localhost:3000
[info] ffmpeg-core loaded
Open client.html
in a web browser and select a video file. When you click the Create Thumbnail button, you should see the thumbnail appear on the page. Behind the scenes, the site uploads the video to the API, which processes it and responds with the image. However, when you click the button repeatedly in quick succession, the API will handle the first request. The subsequent requests will fail:
OutputError: ffmpeg.wasm can only run one command at a time
at Object.run (.../ffmpeg-api/node_modules/@ffmpeg/ffmpeg/src/createFFmpeg.js:126:13)
at file://.../ffmpeg-api/server.mjs:54:26
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
In the next section, you’ll learn how to deal with concurrent requests.
Since ffmpeg.wasm
can only execute a single operation at a time, you’ll need a way of serializing requests that come in and processing them one at a time. In this scenario, a promise queue is a perfect solution. Instead of starting to process each request right away, it will be queued up and processed when all the requests that arrived before it have been handled.
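A queue with a concurrency of 1 behaves like a simple promise chain. Here is a hedged, dependency-free sketch of the idea (not p-queue's actual implementation):

```javascript
// Each added task starts only after the previous task has settled, which
// is exactly the guarantee ffmpeg.wasm needs: one operation at a time.
class SerialQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  add(task) {
    const run = this.tail.then(() => task());
    this.tail = run.catch(() => {}); // a failed task must not block the queue
    return run;
  }
}

const queue = new SerialQueue();
const order = [];
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

queue.add(async () => { await delay(30); order.push('slow'); });
queue.add(async () => { order.push('fast'); });
queue.add(async () => {
  console.log(order.join(',')); // slow,fast
});
```

Even though the second task could finish instantly, it waits for the slow one ahead of it, so results come out in arrival order.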
Open server.mjs
in your preferred editor:
- nano server.mjs
Import p-queue
at the top of server.mjs
:
import express from 'express';
import cors from 'cors';
import multer from 'multer';
import { createFFmpeg } from '@ffmpeg/ffmpeg';
import PQueue from 'p-queue';
...
Then, create a new queue at the top of server.mjs
file under the variable ffmpegLoadingPromise
:
...
const ffmpegInstance = createFFmpeg({ log: true });
let ffmpegLoadingPromise = ffmpegInstance.load();
const requestQueue = new PQueue({ concurrency: 1 });
...
In the POST /thumbnail
endpoint handler, wrap the calls to ffmpeg in a function that will be queued up:
...
app.post('/thumbnail', upload.single('video'), async (req, res) => {
try {
const videoData = req.file.buffer;
const ffmpeg = await getFFmpeg();
const inputFileName = `input-video`;
const outputFileName = `output-image.png`;
let outputData = null;
await requestQueue.add(async () => {
ffmpeg.FS('writeFile', inputFileName, videoData);
await ffmpeg.run(
'-ss', '00:00:01.000',
'-i', inputFileName,
'-frames:v', '1',
outputFileName
);
outputData = ffmpeg.FS('readFile', outputFileName);
ffmpeg.FS('unlink', inputFileName);
ffmpeg.FS('unlink', outputFileName);
});
res.writeHead(200, {
'Content-Type': 'image/png',
'Content-Disposition': `attachment;filename=${outputFileName}`,
'Content-Length': outputData.length
});
res.end(Buffer.from(outputData, 'binary'));
} catch(error) {
console.error(error);
res.sendStatus(500);
}
});
...
Every time a new request comes in, it will only start processing when there’s nothing else queued up in front of it. Note that the final sending of the response can happen asynchronously. Once the ffmpeg.wasm
operation finishes running, another request can start processing while the response goes out.
To test that everything works as expected, start up the server again:
- node --experimental-wasm-threads server.mjs
Open the client.html
file in your browser and try uploading a file.
With the queue in place, the API will now respond every time. The requests will be handled sequentially in the order in which they arrive.
In this article, you built a Node.js service that extracts a thumbnail from a video using ffmpeg.wasm
. You learned how to upload binary data from the browser to your Express API using multipart requests and how to process media with FFmpeg in Node.js without relying on external tools or having to write data to disk.
FFmpeg is an incredibly versatile tool. You can use the knowledge from this tutorial to take advantage of any features that FFmpeg supports and use them in your project. For example, to generate a three-second GIF, change the ffmpeg.run
call to this on the POST /thumbnail
endpoint:
...
await ffmpeg.run(
'-y',
'-t', '3',
'-i', inputFileName,
'-filter_complex', 'fps=5,scale=720:-1:flags=lanczos[x];[x]split[x1][x2];[x1]palettegen[p];[x2][p]paletteuse',
'-f', 'gif',
outputFileName
);
...
The library accepts the same parameters as the original ffmpeg
CLI tool. You can use the official documentation to find a solution for your use case and test it quickly in the terminal.
Thanks to ffmpeg.wasm
being self-contained, you can dockerize this service using the stock Node.js base images and scale your service up by keeping multiple nodes behind a load balancer. Follow the tutorial How To Build a Node.js Application with Docker to learn more.
If your use case requires performing more expensive operations, such as transcoding large videos, make sure that you run your service on machines with enough memory to store them. Due to current limitations in WebAssembly, the maximum input file size cannot exceed 2GB, although this might change in the future.
Additionally, ffmpeg.wasm
cannot take advantage of some x86 assembly optimizations from the original FFmpeg codebase. That means some operations can take a long time to finish. If that’s the case, consider whether this is the right solution for your use case. Alternatively, make requests to your API asynchronous. Instead of waiting for the operation to finish, queue it up and respond with a unique ID. Create another endpoint that the clients can query to find out whether the processing ended and the output file is ready. Learn more about the asynchronous request-reply pattern for REST APIs and how to implement it.
Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.
In this guide, we will show you three different ways of getting Node.js installed on an Ubuntu 16.04 server:
- Using apt to install the nodejs package from Ubuntu’s default software repository
- Using apt with an alternate PPA software repository to install specific versions of the nodejs package
- Installing nvm, the Node Version Manager, and using it to install and manage multiple versions of Node.js
For many users, using apt
with the default repo will be sufficient. If you need specific newer (or legacy) versions of Node, you should use the PPA repository. If you are actively developing Node applications and need to switch between node
versions frequently, choose the nvm
method.
This guide assumes that you are using Ubuntu 16.04. Before you begin, you should have a non-root user account with sudo
privileges set up on your system. You can learn how to do this by following the Ubuntu 16.04 initial server setup tutorial.
Warning: the version of Node.js included with Ubuntu 16.04, version 4.2.6, is now unsupported and unmaintained. You should not use this version, and should refer to one of the other sections in this tutorial to install a more recent version of Node.
To get this version, you can use the apt-get
package manager. Refresh your local package index first by typing:
- sudo apt-get update
Then install Node.js:
- sudo apt-get install nodejs
Check that the install was successful by querying node
for its version number:
- nodejs -v
Outputv4.2.6
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, you’ll also want to install npm
, the Node.js package manager. You can do this by installing the npm
package with apt
:
- sudo apt-get install npm
This will allow you to install modules and packages to use with Node.js.
At this point you have successfully installed Node.js and npm
using apt-get
and the default Ubuntu software repositories. The next section will show how to use an alternate repository to install different versions of Node.js.
To install a different version of Node.js, you can use a PPA (personal package archive) maintained by NodeSource. These PPAs have more versions of Node.js available than the official Ubuntu repositories. Node.js v12, v14, and v16 are available as of the time of writing.
First, we will install the PPA in order to get access to its packages. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 16.x
with your preferred version string (if different).
- cd ~
- curl -sL https://deb.nodesource.com/setup_16.x -o nodesource_setup.sh
Refer to the NodeSource documentation for more information on the available versions.
Inspect the contents of the downloaded script with nano
(or your preferred text editor):
- nano nodesource_setup.sh
When you are satisfied that the script is safe to run, exit your editor, then run the script with sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. You can now install the Node.js package in the same way you did in the previous section:
- sudo apt-get install nodejs
Verify that you’ve installed the new version by running node
with the -v
version flag:
- node -v
Output
v16.10.0
The NodeSource nodejs
package contains both the node
binary and npm
, so you don’t need to install npm
separately.
At this point you have successfully installed Node.js and npm
using apt
and the NodeSource PPA. The next section will show how to use the Node Version Manager to install and manage multiple versions of Node.js.
Another way of installing Node.js that is particularly flexible is to use nvm, the Node Version Manager. This piece of software allows you to install and maintain many different independent versions of Node.js, and their associated Node packages, at the same time.
To install NVM on your Ubuntu 16.04 machine, visit the project’s GitHub page. Copy the curl
command from the README file that displays on the main page. This will get you the most recent version of the installation script.
Before piping the command through to bash
, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash
segment at the end of the curl
command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh
Take a look and make sure you are comfortable with the changes it is making. When you are satisfied, run the command again with | bash
appended at the end. The URL you use will change depending on the latest version of nvm, but as of right now, the script can be downloaded and executed by typing:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
This will install the nvm
script to your user account. To use it, you must first source your .bashrc
file:
- source ~/.bashrc
Now, you can ask NVM which versions of Node are available:
- nvm list-remote
Output
. . .
v14.16.0 (LTS: Fermium)
v14.16.1 (LTS: Fermium)
v14.17.0 (LTS: Fermium)
v14.17.1 (LTS: Fermium)
v14.17.2 (LTS: Fermium)
v14.17.3 (LTS: Fermium)
v14.17.4 (Latest LTS: Fermium)
v15.0.0
v15.0.1
v15.1.0
v15.2.0
v15.2.1
v15.3.0
v15.4.0
v15.5.0
v15.5.1
v15.6.0
v15.7.0
v15.8.0
v15.9.0
v15.10.0
v15.11.0
v15.12.0
v15.13.0
v15.14.0
v16.0.0
v16.1.0
v16.2.0
It’s a very long list! You can install a version of Node by typing any of the release versions you see. For instance, to get version v14.10.1, you can type:
- nvm install v14.10.1
You can see the different versions you have installed by typing:
- nvm list
Output
-> v14.10.1
system
default -> v14.10.1 (-> N/A)
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v14.10.1) (default)
stable -> 14.10 (-> v14.10.1) (default)
. . .
This shows the currently active version on the first line (-> v14.10.1
), followed by some named aliases and the versions that those aliases point to.
Note: if you also have a version of Node.js installed through apt
, you may see a system
entry here. You can always activate the system-installed version of Node using nvm use system
.
Additionally, you’ll see aliases for the various long-term support (or LTS) releases of Node:
Output
. . .
lts/* -> lts/fermium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.4 (-> N/A)
lts/fermium -> v14.17.4 (-> N/A)
We can install a release based on these aliases as well. For instance, to install the latest long-term support version, fermium
, run the following:
- nvm install lts/fermium
Output
Downloading and installing node v14.17.4...
. . .
Now using node v14.17.4 (npm v6.14.14)
You can switch between installed versions with nvm use
:
- nvm use v14.10.1
Output
Now using node v14.10.1 (npm v6.14.8)
You can verify that the install was successful using the same technique from the other sections, by typing:
- node -v
Output
v14.10.1
The correct version of Node is installed on our machine as we expected. A compatible version of npm
is also available.
There are quite a few ways to get up and running with Node.js on your Ubuntu 16.04 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Ubuntu’s repository is the easiest method, using nvm
or a NodeSource PPA offers additional flexibility.
For more information on programming with Node.js, please refer to our tutorial series How To Code in Node.js.
Etherpad is a web application that enables real-time collaborative text editing in the browser. It is written in Node.js and can be self-hosted on a variety of platforms and operating systems.
In this tutorial we will install Etherpad on an Ubuntu 20.04 server, using the SQLite database engine to store our data. We’ll also install and configure Nginx to act as a reverse proxy for the application, and we’ll fetch and install SSL certificates from the Let’s Encrypt certificate authority to enable secure HTTPS connections to our Etherpad instance.
Note: If you’d prefer to use our App Platform service to self-host Etherpad, please see our Deploy the Etherpad Collaborative Web Editor to App Platform tutorial, where we create an App Platform app to run the Etherpad Docker container, and connect it to a managed PostgreSQL database.
Before starting this tutorial, you will need the following:
- An Ubuntu 20.04 server with a sudo-enabled user, and with the UFW firewall enabled. Please read our Initial Server Setup with Ubuntu 20.04 to learn more about setting up these requirements.
- A domain name pointed at your server, such as example.com or etherpad.example.com, for instance.

Note: If you’re using DigitalOcean, our DNS Documentation can help you get your domain name set up in the control panel.
When you have the prerequisites in place, continue to Step 1, where we’ll download and configure the Etherpad application.
To install Etherpad, you’ll need to download the source code, install dependencies, and configure systemd to run the server.
The Etherpad maintainers recommend running the software as its own user, so your first step will be to create a new etherpad user using the adduser
command:
- sudo adduser --system --group --home /opt/etherpad etherpad
This creates a --system
user, meaning that it can’t log in directly and has no password or shell assigned. We give it a home directory of /opt/etherpad
, which is where we’ll download and configure the Etherpad software. We also create an etherpad group using the --group
flag.
You now need to run a few commands as the etherpad user. To do so, you’ll use the sudo
command to open a bash
shell as the etherpad user. Then you’ll change directories (cd
) to /opt/etherpad
:
- sudo -u etherpad bash
- cd /opt/etherpad
Your shell prompt will update to show that you’re the etherpad user. It should look similar to etherpad@host:~$
.
Now clone the Etherpad repository into /opt/etherpad
using Git:
- git clone --branch master https://github.com/ether/etherpad-lite.git .
This will pull the master
branch of the Etherpad source code into the current directory (.
). When that’s done, run Etherpad’s installDeps.sh
script to install the dependencies:
- ./bin/installDeps.sh
This can take a minute. When it’s done, you’ll need to manually install one last dependency. We need to cd
into the Etherpad src
folder and install the sqlite3
package in order to use SQLite as our database.
First, change into the src
directory:
- cd src
Then install the sqlite3
package using npm
:
- npm install sqlite3
Your final task as the etherpad user is to update the Etherpad settings.json
file to use SQLite for its database, and to work well with Nginx. Move back up to the /opt/etherpad
directory:
- cd /opt/etherpad
Then open the settings file using your favorite text editor:
- nano settings.json
The file is formatted as JSON, but with extensive comments throughout explaining each setting. There’s a lot you can configure, but for now we’re interested in two values that update the database configuration:
"dbType": "dirty",
"dbSettings": {
"filename": "var/dirty.db"
},
Scroll down and look for the dbType
and dbSettings
section, shown here. Update the settings to sqlite
and a filename of your choice, like the following:
"dbType": "sqlite",
"dbSettings": {
"filename": "var/sqlite.db"
},
Finally, scroll down some more, find the trustProxy
setting, and update it to true
:
"trustProxy": true,
Save and close the settings file. In nano
you can save and close by typing CTRL+O
then ENTER
to save, and CTRL+X
to exit.
When that’s done, be sure to exit the etherpad user’s shell:
- exit
You’ll be returned to your normal user’s shell.
Etherpad is installed and configured. Next we’ll create a systemd service to start and manage the Etherpad process.
In order to start Etherpad on boot and to manage the process using systemctl
, we need to create a systemd service file. Open up the new file in your favorite text editor:
- sudo nano /etc/systemd/system/etherpad.service
We’re going to create a service definition based on information in Etherpad’s documentation wiki. The How to deploy Etherpad Lite as a service page gives an example configuration that needs just a few changes to make it work for us.
Add the following content into your text editor, then save and close the file:
[Unit]
Description=Etherpad, a collaborative web editor.
After=syslog.target network.target
[Service]
Type=simple
User=etherpad
Group=etherpad
WorkingDirectory=/opt/etherpad
Environment=NODE_ENV=production
ExecStart=/usr/bin/node --experimental-worker /opt/etherpad/node_modules/ep_etherpad-lite/node/server.js
Restart=always
[Install]
WantedBy=multi-user.target
This file gives systemd the information it needs to run Etherpad, including the user and group to run it as, and the command used to start the process (ExecStart=...
).
After closing the file, reload the systemd daemon to pull in the new configuration:
- sudo systemctl daemon-reload
Next, enable the etherpad
service. This means the service will start up whenever your server reboots:
- sudo systemctl enable etherpad
And finally, we can start the service:
- sudo systemctl start etherpad
Check that the service started properly using systemctl status
:
- sudo systemctl status etherpad
Output
● etherpad.service - Etherpad, a collaborative web editor.
Loaded: loaded (/etc/systemd/system/etherpad.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-09-09 14:12:43 UTC; 18min ago
Main PID: 698 (node)
Tasks: 13 (limit: 1136)
Memory: 152.0M
CGroup: /system.slice/etherpad.service
└─698 /usr/bin/node --experimental-worker /opt/etherpad/node_modules/ep_etherpad-lite/node/server.js
The output should indicate that the service is active (running)
.
Etherpad is now running, but it is unavailable to the public because port 9001
is blocked by your firewall. In the next step we’ll make Etherpad public by setting up Nginx as a reverse proxy in front of the Etherpad process.
Putting a web server such as Nginx in front of your Node.js server can improve performance by offloading caching, compression, and static file serving to a more efficient process. We’re going to install Nginx and configure it to proxy requests to Etherpad, meaning it will take care of handing requests from your users to Etherpad and back again.
First, refresh your package list, then install Nginx using apt
:
- sudo apt update
- sudo apt install nginx
Allow traffic to ports 80
and 443
(HTTP and HTTPS) using the “Nginx Full” UFW application profile:
- sudo ufw allow "Nginx Full"
Output
Rule added
Rule added (v6)
Next, open up a new Nginx configuration file in the /etc/nginx/sites-available
directory. We’ll call ours etherpad.conf
but you could use a different name:
- sudo nano /etc/nginx/sites-available/etherpad.conf
Paste the following into the new configuration file, being sure to replace your_domain_here
with the domain that is pointing to your Etherpad server. This will be something like etherpad.example.com
, for instance.
server {
listen 80;
listen [::]:80;
server_name your_domain_here;
access_log /var/log/nginx/etherpad.access.log;
error_log /var/log/nginx/etherpad.error.log;
location / {
proxy_pass http://127.0.0.1:9001;
proxy_buffering off;
proxy_set_header Host $host;
proxy_pass_header Server;
# proxy headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
# websocket proxying
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
This configuration is loosely based on a configuration provided on the Etherpad wiki. It is HTTP-only for now, as we’ll let Certbot take care of configuring SSL in the next step. The rest of the config sets up logging locations and then passes all traffic along to http://127.0.0.1:9001
, the Etherpad instance we started up in the previous step. We also set various headers that are required for well-behaved proxying and for websockets (persistent HTTP connections that enable real-time two-way communication) to work through a proxy.
Save and close the file, then enable the configuration by linking it into /etc/nginx/sites-enabled/
:
- sudo ln -s /etc/nginx/sites-available/etherpad.conf /etc/nginx/sites-enabled/
Use nginx -t
to verify that the configuration file syntax is correct:
- sudo nginx -t
Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
And finally, reload the nginx
service to pick up the new configuration:
- sudo systemctl reload nginx
Your Etherpad site should now be available on plain HTTP, and it will look something like this:
Now that we have our site up and running over HTTP, it’s time to secure the connection with Certbot and Let’s Encrypt certificates.
Thanks to Certbot and the Let’s Encrypt free certificate authority, adding SSL encryption to our Etherpad app will take only two commands.
First, install Certbot and its Nginx plugin:
- sudo apt install certbot python3-certbot-nginx
Next, run certbot
in --nginx
mode, and specify the same domain you used in the Nginx server_name
config:
- sudo certbot --nginx -d your_domain_here
You’ll be prompted to agree to the Let’s Encrypt terms of service, and to enter an email address.
Afterwards, you’ll be asked if you want to redirect all HTTP traffic to HTTPS. It’s up to you, but this is generally recommended and safe to do.
After that, Let’s Encrypt will confirm your request and Certbot will download your certificate:
Output
Congratulations! You have successfully enabled https://etherpad.example.com
You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=etherpad.example.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/etherpad.example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/etherpad.example.com/privkey.pem
Your cert will expire on 2021-12-06. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Certbot will automatically reload Nginx to pick up the new configuration and certificates. Reload your site and it should switch you over to HTTPS automatically if you chose the redirect option.
You’re done! Try out your new Etherpad editor and invite some collaborators.
In this tutorial we set up Etherpad, with Nginx and Let’s Encrypt SSL certificates. Your Etherpad is now ready to use, but there’s more configuration you may want to do, including adding authenticated users, adding plugins, and customizing the user interface through skins.
Your SQLite-backed Etherpad instance will be able to handle a moderate number of active users, but if you anticipate very high traffic, you may want to look into configuring a MySQL or PostgreSQL database instead.
All of these configuration options are documented on the official Etherpad wiki.
Digital image processing is a method of using a computer to analyze and manipulate images. The process involves reading an image, applying methods to alter or enhance the image, and then saving the processed image. It’s common for applications that handle user-uploaded content to process images. For example, if you’re writing a web application that allows users to upload images, users may upload unnecessarily large images. This can negatively impact the application load speed and also waste your server space. With image processing, your application can resize and compress all the user-uploaded images, which can significantly improve your application performance and save your server disk space.
Node.js has an ecosystem of libraries you can use to process images, such as sharp, jimp, and gm. This article will focus on the sharp module. sharp is a popular Node.js image processing library that supports various image file formats, such as JPEG, PNG, GIF, WebP, AVIF, SVG and TIFF.
In this tutorial, you’ll use sharp to read an image and extract its metadata, resize, change an image format, and compress an image. You will then crop, grayscale, rotate, and blur an image. Finally, you will composite images, and add text on an image. By the end of this tutorial, you’ll have a good understanding of how to process images in Node.js.
To complete this tutorial, you’ll need:
Node.js set up in your local development environment. You can follow How to Install Node.js and Create a Local Development Environment to learn how to install Node.js and npm on your system.
Basic knowledge of how to write and run a Node.js program. You can follow How To Write and Run Your First Program in Node.js to learn the basics.
Basic understanding of asynchronous programming in JavaScript. Follow Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript to review asynchronous programming.
Before you start writing your code, you need to create the directory that will contain the code and the images you’ll use in this article.
Open your terminal and create the directory for the project using the mkdir
command:
- mkdir process_images
Move into the newly created directory using the cd
command:
- cd process_images
Create a package.json
file using npm init
command to keep track of the project dependencies:
- npm init -y
The -y
option tells npm
to create the default package.json
file.
Next, install sharp
as a dependency:
- npm install sharp
You will use the following three images in this tutorial:
Next, download the images in your project directory using the curl
command.
Use the following command to download the first image. This will download the image as sammy.png
:
- curl -O https://assets.digitalocean.com/how-to-process-images-in-node-js-with-sharp/sammy.png
Next, download the second image with the following command. This will download the image as underwater.png
:
- curl -O https://assets.digitalocean.com/how-to-process-images-in-node-js-with-sharp/underwater.png
Finally, download the third image using the following command. This will download the image as sammy-transparent.png
:
- curl -O https://assets.digitalocean.com/how-to-process-images-in-node-js-with-sharp/sammy-transparent.png
With the project directory and the dependencies set up, you’re now ready to start processing images.
In this section, you’ll write code to read an image and extract its metadata. Image metadata is text embedded into an image, which includes information about the image such as its type, width, and height.
To extract the metadata, you’ll first import the sharp module, create an instance of sharp
, and pass the image path as an argument. After that, you’ll chain the metadata()
method to the instance to extract the metadata and log it into the console.
To do this, create and open readImage.js
file in your preferred text editor. This tutorial uses a terminal text editor called nano
:
- nano readImage.js
Next, require in sharp
at the top of the file:
const sharp = require("sharp");
sharp is a promise-based image processing module. When you create a sharp
instance, it returns a promise. You can resolve the promise using the then
method or use async/await
, which has a cleaner syntax.
To use async/await
syntax, you’ll need to create an asynchronous function by placing the async
keyword at the beginning of the function. This will allow you to use the await
keyword inside the function to resolve the promise returned when you read an image.
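As a quick sketch of the two styles, here is the same promise consumed with .then() and with async/await. fakeMetadata() below is a hypothetical stand-in for sharp("sammy.png").metadata(), used only so the pattern is visible without the library installed:

```javascript
// fakeMetadata() is a hypothetical stand-in for sharp("sammy.png").metadata():
// it returns a promise that resolves with a metadata-like object.
function fakeMetadata() {
  return Promise.resolve({ format: "png", width: 750, height: 483 });
}

// Style 1: resolve the promise with .then() and handle errors with .catch()
fakeMetadata()
  .then((metadata) => console.log(metadata.format))
  .catch((error) => console.log(error));

// Style 2: the equivalent async/await version with try...catch
async function show() {
  try {
    const metadata = await fakeMetadata();
    console.log(metadata.format);
  } catch (error) {
    console.log(error);
  }
}
show();
```

Both styles do the same work; the rest of this tutorial uses async/await for its flatter, cleaner syntax.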
In your readImage.js
file, define an asynchronous function, getMetadata()
, to read the image, extract its metadata, and log it into the console:
const sharp = require("sharp");
async function getMetadata() {
const metadata = await sharp("sammy.png").metadata();
console.log(metadata);
}
getMetadata()
is an asynchronous function given the async
keyword you defined before the function
keyword. This lets you use the await
syntax within the function. The getMetadata()
function will read an image and return an object with its metadata.
Within the function body, you read the image by calling sharp()
which takes the image path as an argument, here with sammy.png
.
Apart from taking an image path, sharp()
can also read image data stored in a Buffer, Uint8Array, or Uint8ClampedArray provided the image is JPEG, PNG, GIF, WebP, AVIF, SVG or TIFF.
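For example, image bytes held in a Buffer (perhaps received over the network or loaded from a database) can be handed to sharp() in place of a path. This sketch only builds the Buffer with Node’s standard library; the sharp call is shown as a comment because it is identical to the path-based usage above:

```javascript
// A Buffer might hold image data that never touched the filesystem.
// For illustration, this one contains the first bytes of the PNG signature.
const imageBuffer = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // "\x89PNG"

console.log(Buffer.isBuffer(imageBuffer)); // → true

// You would then read it the same way as a file path:
// const metadata = await sharp(imageBuffer).metadata();
```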
Now, when you use sharp()
to read the image, it creates a sharp
instance. You then chain the metadata()
method of the sharp module to the instance. The method returns an object containing the image metadata, which you store in the metadata
variable and log its contents using console.log()
.
Your program can now read an image and return its metadata. However, if the program throws an error during execution, it will crash. To get around this, you need to capture the errors when they occur.
To do that, wrap the code within the getMetadata()
function inside a try...catch
block:
const sharp = require("sharp");
async function getMetadata() {
try {
const metadata = await sharp("sammy.png").metadata();
console.log(metadata);
} catch (error) {
console.log(`An error occurred during processing: ${error}`);
}
}
Inside the try
block, you read an image, extract and log its metadata. When an error occurs during this process, execution skips to the catch
section and logs the error preventing the program from crashing.
Finally, call the getMetadata()
function by adding the highlighted line:
const sharp = require("sharp");
async function getMetadata() {
try {
const metadata = await sharp("sammy.png").metadata();
console.log(metadata);
} catch (error) {
console.log(`An error occurred during processing: ${error}`);
}
}
getMetadata();
Now, save and exit the file. Enter y
to save the changes you made in the file, and confirm the file name by pressing the ENTER or RETURN key.
Run the file using the node
command:
- node readImage.js
You should see an output similar to this:
Output
{
format: 'png',
width: 750,
height: 483,
space: 'srgb',
channels: 3,
depth: 'uchar',
density: 72,
isProgressive: false,
hasProfile: false,
hasAlpha: false
}
Now that you’ve read an image and extracted its metadata, you’ll resize an image, change its format, and compress it.
Resizing is the process of altering an image dimension without cutting anything from it, which affects the image file size. In this section, you’ll resize an image, change its image type, and compress the image. Image compression is the process of reducing an image file size without losing quality.
First, you’ll chain the resize()
method from the sharp
instance to resize the image, and save it in the project directory. Second, you’ll chain the toFormat()
method to the resized image to change its format from png
to jpeg
. Additionally, you will pass an option to the toFormat()
method to compress the image and save it to the directory.
Create and open resizeImage.js
file in your text editor:
- nano resizeImage.js
Add the following code to resize the image to 150px
width and 97px
height:
const sharp = require("sharp");
async function resizeImage() {
try {
await sharp("sammy.png")
.resize({
width: 150,
height: 97
})
.toFile("sammy-resized.png");
} catch (error) {
console.log(error);
}
}
resizeImage();
The resizeImage()
function chains the sharp module’s resize()
method to the sharp
instance. The method takes an object as an argument. In the object, you set the image dimensions you want using the width
and height
property. Setting the width
to 150
and the height
to 97
will make the image 150px
wide, and 97px
tall.
After resizing the image, you chain the sharp module’s toFile()
method, which takes the image path as an argument. Passing sammy-resized.png
as an argument will save the image file with that name in the working directory of your program.
Now, save and exit the file. Run your program in the terminal:
- node resizeImage.js
You will get no output, but you should see a new image file created with the name sammy-resized.png
in the project directory.
Open the image on your local machine. You should see an image of Sammy 150px
wide and 97px
tall:
Now that you can resize an image, next you’ll convert the resized image format from png
to jpeg
, compress the image, and save it in the working directory. To do that, you will use the toFormat()
method, which you’ll chain after the resize()
method.
Add the highlighted code to change the image format to jpeg
and compress it:
const sharp = require("sharp");
async function resizeImage() {
try {
await sharp("sammy.png")
.resize({
width: 150,
height: 97
})
.toFormat("jpeg", { mozjpeg: true })
.toFile("sammy-resized-compressed.jpeg");
} catch (error) {
console.log(error);
}
}
resizeImage();
Within the resizeImage()
function, you use the toFormat()
method of the sharp module to change the image format and compress it. The first argument of the toFormat()
method is a string containing the image format you want to convert your image to. The second argument is an optional object containing output options that enhance and compress the image.
To compress the image, you pass it a mozjpeg
property that holds a boolean value. When you set it to true
, sharp uses mozjpeg defaults to compress the image without sacrificing quality. The object can also take more options; see the sharp documentation for more details.
Note: Regarding the toFormat()
method’s second argument, each image format takes an object with different properties. For example, mozjpeg
property is accepted only on JPEG
images.
However, other image formats have equivalent options such as quality
, compression
, and lossless
. Make sure to refer to the documentation to know what kind of options are acceptable for the image format you are compressing.
Next, you pass the toFile()
method a different filename to save the compressed image as sammy-resized-compressed.jpeg
.
Now, save and exit the file, then run your code with the following command:
- node resizeImage.js
You will receive no output, but an image file sammy-resized-compressed.jpeg
is saved in your project directory.
Open the image on your local machine and you will see the following image:
With your image now compressed, check the file size to confirm your compression is successful. In your terminal, run the du
command to check the file size for sammy.png
:
- du -h sammy.png
The -h
option produces human-readable output showing you the file size in kilobytes, megabytes, and so on.
After running the command, you should see an output similar to this:
Output
120K sammy.png
The output shows that the original image is 120 kilobytes.
Next, check the file size for sammy-resized.png
:
- du -h sammy-resized.png
After running the command, you will see the following output:
Output
8.0K sammy-resized.png
sammy-resized.png
is 8 kilobytes down from 120 kilobytes. This shows that the resizing operation affects the file size.
Now, check the file size for sammy-resized-compressed.jpeg
:
- du -h sammy-resized-compressed.jpeg
After running the command, you will see the following output:
Output
4.0K sammy-resized-compressed.jpeg
The sammy-resized-compressed.jpeg
is now 4 kilobytes down from 8 kilobytes, saving you 4 kilobytes, showing that the compression worked.
Now that you’ve resized an image, changed its format and compressed it, you will crop and grayscale the image.
In this step, you will crop an image, and convert it to grayscale. Cropping is the process of removing unwanted areas from an image. You’ll use the extract()
method to crop the sammy.png
image. After that, you’ll chain the grayscale()
method to the cropped image instance and convert it to grayscale.
Create and open cropImage.js
in your text editor:
- nano cropImage.js
In your cropImage.js
file, add the following code to crop the image:
const sharp = require("sharp");
async function cropImage() {
try {
await sharp("sammy.png")
.extract({ width: 500, height: 330, left: 120, top: 70 })
.toFile("sammy-cropped.png");
} catch (error) {
console.log(error);
}
}
cropImage();
The cropImage()
function is an asynchronous function that reads an image and returns your image cropped. Within the try
block, a sharp
instance will read the image. Then, the sharp module’s extract()
method chained to the instance takes an object with the following properties:
- width: the width of the area you want to crop.
- height: the height of the area you want to crop.
- top: the vertical position of the area you want to crop.
- left: the horizontal position of the area you want to crop.
to 500
and the height
to 330
, imagine that sharp creates a transparent box on top of the image you want to crop. Any part of the image that fits in the box will remain, and the rest will be cut:
The top
and left
properties control the position of the box. When you set left
to 120
, the box is positioned 120px from the left edge of the image, and setting top
to 70
positions the box 70px from the top edge of the image.
The area of the image that fits within the box will be extracted out and saved into sammy-cropped.png
as a separate image.
Save and exit the file. Run the program in the terminal:
- node cropImage.js
The output won’t be shown but the image sammy-cropped.png
will be saved in your project directory.
Open the image on your local machine. You should see the image cropped:
Now that you’ve cropped an image, you will convert the image to grayscale. To do that, you’ll chain the grayscale
method to the sharp
instance. Add the highlighted code to convert the image to grayscale:
const sharp = require("sharp");
async function cropImage() {
try {
await sharp("sammy.png")
.extract({ width: 500, height: 330, left: 120, top: 70 })
.grayscale()
.toFile("sammy-cropped-grayscale.png");
} catch (error) {
console.log(error);
}
}
cropImage();
The cropImage()
function converts the cropped image to grayscale by chaining the sharp module’s grayscale()
method to the sharp
instance. It then saves the image in the project directory as sammy-cropped-grayscale.png
.
Press CTRL+X
to save and exit the file.
Run your code in the terminal:
- node cropImage.js
Open sammy-cropped-grayscale.png
on your local machine. You should now see the image in grayscale:
Now that you’ve cropped and converted the image to grayscale, you’ll work with rotating and blurring it.
In this step, you’ll rotate the sammy.png
image by 33
degrees. You’ll also apply a Gaussian blur on the rotated image. A Gaussian blur is a technique of blurring an image using the Gaussian function, which reduces the noise level and detail on an image.
Create a rotateImage.js
file in your text editor:
- nano rotateImage.js
In your rotateImage.js
file, write the following code block to create a function that rotates sammy.png
to an angle of 33
degrees:
const sharp = require("sharp");
async function rotateImage() {
try {
await sharp("sammy.png")
.rotate(33, { background: { r: 0, g: 0, b: 0, alpha: 0 } })
.toFile("sammy-rotated.png");
} catch (error) {
console.log(error);
}
}
rotateImage();
The rotateImage()
function is an asynchronous function that reads an image and returns it rotated to an angle of 33
degrees. Within the function, the rotate()
method of the sharp module takes two arguments. The first argument is the rotation angle of 33
degrees. By default, sharp makes the background of the rotated image black. To remove the black background, you pass an object as a second argument to make the background transparent.
The object has a background
property which holds an object defining the RGBA color model. RGBA stands for red, green, blue, and alpha.
r
: controls the intensity of the red color. It accepts an integer value of 0
to 255
. 0
means the color is not being used, and 255
is red at its highest.
g
: controls the intensity of the green color. It accepts an integer value of 0-255
. 0
means that the color green is not used, and 255
is green at its highest.
b
: controls the intensity of blue
. It also accepts an integer value between 0
and 255
. 0
means that the blue color isn’t used, and 255
is blue at its highest.
alpha
: controls the opacity of the color defined by r
, g
, and b
properties. 0
or 0.0
makes the color transparent and 1
or 1.0
makes the color opaque.
For the alpha
property to work, you must make sure you define and set the values for r
, g
, and b
. Setting the r
, g
, and b
values to 0
creates a black color. To create a transparent background, you must define a color first, then you can set alpha
to 0
to make it transparent.
Now, save and exit the file. Run your script in the terminal:
- node rotateImage.js
Check for the existence of sammy-rotated.png
in your project directory. Open it on your local machine.
You should see the image rotated to an angle of 33
degrees:
Next, you’ll blur the rotated image. You’ll achieve that by chaining the blur()
method to the sharp
instance.
Enter the highlighted code below to blur the image:
const sharp = require("sharp");
async function rotateImage() {
try {
await sharp("sammy.png")
.rotate(33, { background: { r: 0, g: 0, b: 0, alpha: 0 } })
.blur(4)
.toFile("sammy-rotated-blurred.png");
} catch (error) {
console.log(error);
}
}
rotateImage();
The rotateImage()
function now reads the image, rotates it, and applies a Gaussian blur using the sharp module’s blur()
method. The method accepts a single argument: a sigma value between 0.3
and 1000
. Passing it 4
applies a Gaussian blur with a sigma value of 4
. After the image is blurred, you define a path to save the blurred image.
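Because blur() only accepts sigma values in that range, you could guard the input with a small helper before passing it on. This is a hypothetical convenience function, not part of sharp:

```javascript
// Hypothetical guard: clamp a sigma value into the range blur() accepts.
function clampSigma(sigma) {
  const MIN = 0.3;  // smallest sigma blur() accepts
  const MAX = 1000; // largest sigma blur() accepts
  return Math.min(MAX, Math.max(MIN, sigma));
}

console.log(clampSigma(4));    // 4
console.log(clampSigma(0.1));  // 0.3
console.log(clampSigma(5000)); // 1000
```

With this in place, `.blur(clampSigma(userValue))` would never receive an out-of-range sigma.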
Your script will now blur the rotated image with a sigma value of 4
. Save and exit the file, then run the script in your terminal:
- node rotateImage.js
After running the script, open sammy-rotated-blurred.png
file on your local machine. You should now see the rotated image blurred:
Now that you’ve rotated and blurred an image, you’ll composite an image over another.
Image Composition is a process of combining two or more separate pictures to create a single image. This is done to create effects that borrow the best elements from the different photos. Another common use case is to watermark an image with a logo.
In this section, you’ll composite sammy-transparent.png
over the underwater.png
. This will create an illusion of sammy swimming deep in the ocean. To composite the images, you’ll chain the composite()
method to the sharp
instance.
Create and open the file compositeImages.js
in your text editor:
- nano compositeImages.js
Now, create a function to composite the two images by adding the following code in the compositeImages.js
file:
const sharp = require("sharp");
async function compositeImages() {
try {
await sharp("underwater.png")
.composite([
{
input: "sammy-transparent.png",
top: 50,
left: 50,
},
])
.toFile("sammy-underwater.png");
} catch (error) {
console.log(error);
}
}
compositeImages()
The compositeImages()
function reads the underwater.png
image first. Next, you chain the composite()
method of the sharp module, which takes an array as an argument. The array contains a single object that reads the sammy-transparent.png
image. The object has the following properties:
input
: takes the path of the image you want to composite over the processed image. It also accepts a Buffer, Uint8Array, or Uint8ClampedArray as input.
top
: controls the vertical position of the image you want to composite over. Setting top
to 50
offsets the sammy-transparent.png
image 50px from the top edge of the underwater.png
image.
left
: controls the horizontal position of the image you want to composite over another. Setting left
to 50
offsets the sammy-transparent.png
50px from the left edge of the underwater.png
image.
The composite()
method requires an image that is the same size as or smaller than the processed image.
To visualize what the composite()
method is doing, think of it as creating a stack of images. The sammy-transparent.png
image is placed on top of underwater.png
image:
The top
and left
values position the sammy-transparent.png
image relative to the underwater.png
image.
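If you wanted the overlay centered instead of offset by fixed pixel values, the top and left offsets could be derived from the two image sizes. This is a hypothetical helper, assuming you already know both sets of dimensions (for example, from sharp's metadata() method); the example numbers are placeholders:

```javascript
// Hypothetical helper: compute composite() offsets that center an
// overlay of (overlayW x overlayH) on a base image of (baseW x baseH).
function centerOffsets(baseW, baseH, overlayW, overlayH) {
  return {
    left: Math.floor((baseW - overlayW) / 2),
    top: Math.floor((baseH - overlayH) / 2),
  };
}

// e.g. a 300x200 overlay on a 750x483 base image:
console.log(centerOffsets(750, 483, 300, 200)); // { left: 225, top: 141 }
```

The result has the same top and left keys the composite() array entries use, so it can be spread into one: `{ input: "sammy-transparent.png", ...centerOffsets(750, 483, 300, 200) }`.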
Save your script and exit the file. Run your script to create an image composition:
node compositeImages.js
Open sammy-underwater.png
in your local machine. You should now see the sammy-transparent.png
composited over the underwater.png
image:
You’ve now composited images using the composite()
method. In the next step, you’ll use the composite()
method to add text to an image.
In this step, you’ll write text on an image. At the time of writing, sharp doesn’t have a native way of adding text to an image. To add text, first, you’ll write code to draw text using Scalable Vector Graphics (SVG). Once you’ve created the SVG image, you’ll write code to composite the image with the sammy.png
image using the composite
method.
SVG is an XML-based markup language for creating vector graphics for the web. You can draw text or basic shapes such as circles and triangles, as well as complex shapes such as illustrations and logos. Complex shapes are often created with a graphics tool like Inkscape, which generates the SVG code. SVG shapes can be rendered and scaled to any size without losing quality.
Create and open the addTextOnImage.js
file in your text editor.
- nano addTextOnImage.js
In your addTextOnImage.js
file, add the following code to create an SVG container:
const sharp = require("sharp");
async function addTextOnImage() {
try {
const width = 750;
const height = 483;
const text = "Sammy the Shark";
const svgImage = `
<svg width="${width}" height="${height}">
</svg>
`;
} catch (error) {
console.log(error);
}
}
addTextOnImage();
The addTextOnImage()
function defines four variables: width
, height
, text
, and svgImage
. width
holds the integer 750
, and height
holds the integer 483
. text
holds the string Sammy the Shark
. This is the text that you’ll draw using SVG.
The svgImage
variable holds the svg
element. The svg
element has two attributes: width
and height
that interpolate the width
and height
variables you defined earlier. The svg
element creates a transparent container according to the given width and height.
You gave the svg
element a width
of 750
and height
of 483
so that the SVG image will have the same size as sammy.png
. This will help in making the text look centered on the sammy.png
image.
Next, you’ll draw the text graphics. Add the highlighted code to draw Sammy the Shark
on the SVG container:
async function addTextOnImage() {
...
const svg = `
<svg width="${width}" height="${height}">
<text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
</svg>
`;
....
}
The SVG text
element has four attributes: x
, y
, text-anchor
, and class
. x
and y
define the position for the text you are drawing on the SVG container. The x
attribute positions the text horizontally, and the y
attribute positions the text vertically.
Setting x
to 50%
draws the text in the middle of the container on the x-axis, and setting y
to 50%
positions the text in the middle of the SVG image on the y-axis.
The text-anchor
aligns text horizontally. Setting text-anchor
to middle
will center the text at the x
coordinate you specified.
class
defines a class name on the text
element. You’ll use the class name to apply CSS styles to the text
element.
${text}
interpolates the string Sammy the Shark
stored in the text
variable. This is the text that will be drawn on the SVG image.
Next, add the highlighted code to style the text using CSS:
const svg = `
<svg width="${width}" height="${height}">
<style>
.title { fill: #001; font-size: 70px; font-weight: bold;}
</style>
<text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
</svg>
`;
In this code, fill
changes the text color to black, font-size
changes the font size, and font-weight
changes the font weight.
At this point, you have written the code necessary to draw the text Sammy the Shark
with SVG. Next, you’ll save the SVG image as a png
with sharp so that you can see how SVG is drawing the text. Once that is done, you’ll composite the SVG image with sammy.png
.
Add the highlighted code to save the SVG image as a png
with sharp:
....
const svgImage = `
<svg width="${width}" height="${height}">
...
</svg>
`;
const svgBuffer = Buffer.from(svgImage);
const image = await sharp(svgBuffer).toFile("svg-image.png");
} catch (error) {
console.log(error);
}
}
addTextOnImage();
Buffer.from()
creates a Buffer object from the SVG image. A buffer is a temporary space in memory that stores binary data.
After creating the buffer object, you create a sharp
instance with the buffer object as input. In addition to an image path, sharp also accepts a buffer, Uint8Array, or Uint8ClampedArray.
Finally, you save the SVG image in the project directory as svg-image.png
.
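The Buffer round trip itself can be seen in isolation. This sketch runs in plain Node.js with no sharp required; the SVG string here is a trimmed-down stand-in for the one in your script:

```javascript
// Buffer.from() stores the SVG markup as binary data; toString()
// decodes it back, so nothing is lost in the round trip.
const svgImage = `<svg width="750" height="483"></svg>`;
const svgBuffer = Buffer.from(svgImage);

console.log(Buffer.isBuffer(svgBuffer));        // true
console.log(svgBuffer.toString() === svgImage); // true
```

Because the buffer preserves the markup byte for byte, sharp can parse it exactly as if it had been read from an .svg file on disk.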
Here is the complete code:
const sharp = require("sharp");
async function addTextOnImage() {
try {
const width = 750;
const height = 483;
const text = "Sammy the Shark";
const svgImage = `
<svg width="${width}" height="${height}">
<style>
.title { fill: #001; font-size: 70px; font-weight: bold;}
</style>
<text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
</svg>
`;
const svgBuffer = Buffer.from(svgImage);
const image = await sharp(svgBuffer).toFile("svg-image.png");
} catch (error) {
console.log(error);
}
}
addTextOnImage();
Save and exit the file, then run your script with the following command:
node addTextOnImage.js
Note: If you installed Node.js using Option 2 — Installing Node.js with Apt Using a NodeSource PPA or Option 3 — Installing Node Using the Node Version Manager and are getting the error fontconfig error: cannot load default config file: no such file: (null)
, install fontconfig
to generate the font configuration file.
Update your server’s package index, and after that, use apt install
to install fontconfig
.
- sudo apt update
- sudo apt install fontconfig
Open svg-image.png
on your local machine. You should now see the text Sammy the Shark
rendered with a transparent background:
Now that you’ve confirmed the SVG code draws the text, you will composite the text graphics onto sammy.png
.
Add the following highlighted code to composite the SVG text graphics image onto the sammy.png
image.
const sharp = require("sharp");
async function addTextOnImage() {
try {
const width = 750;
const height = 483;
const text = "Sammy the Shark";
const svgImage = `
<svg width="${width}" height="${height}">
<style>
.title { fill: #001; font-size: 70px; font-weight: bold;}
</style>
<text x="50%" y="50%" text-anchor="middle" class="title">${text}</text>
</svg>
`;
const svgBuffer = Buffer.from(svgImage);
const image = await sharp("sammy.png")
.composite([
{
input: svgBuffer,
top: 0,
left: 0,
},
])
.toFile("sammy-text-overlay.png");
} catch (error) {
console.log(error);
}
}
addTextOnImage();
The composite()
method reads the SVG image from the svgBuffer
variable, and positions it 0
pixels from the top, and 0
pixels from the left edge of the sammy.png
. Next, you save the composited image as sammy-text-overlay.png
.
Save and close your file, then run your program using the following command:
- node addTextOnImage.js
Open sammy-text-overlay.png
on your local machine. You should see text added over the image:
You have now used the composite()
method to add text created with SVG on another image.
In this article, you learned how to use sharp methods to process images in Node.js. First, you created an instance to read an image and used the metadata()
method to extract the image metadata. You then used the resize()
method to resize an image. Afterwards, you used the format()
method to change the image type, and compress the image. Next, you proceeded to use various sharp methods to crop, grayscale, rotate, and blur an image. Finally, you used the composite()
method to composite an image, and add text on an image.
For more insight into additional sharp methods, visit the sharp documentation. If you want to continue learning Node.js, see How To Code in Node.js series.
The MERN stack consists of MongoDB, Express, React / Redux, and Node.js. The MERN stack is one of the most popular JavaScript stacks for building modern single-page web applications.
In this tutorial, you will build a todo application that uses a RESTful API, which you will also build later in this tutorial.
To complete this tutorial, you will need:
Note: This tutorial was originally written to use the mLab service to host a MongoDB database. mLab has been closed to new account creation since February 2019 and they suggest using MongoDB Atlas.
It is also possible to follow the installation instructions and run MongoDB locally, but this tutorial will not cover that process and is left as-is for educational purposes.
You will also need a code editor that you are familiar with, preferably one that has support for JavaScript code highlighting.
Downloading and installing a tool like Postman is recommended for testing API endpoints.
This tutorial was verified with Node v14.2.0, npm
v6.14.5, mongodb-community
v4.2.6, express
v4.17.1, and mongoose
v5.9.17.
Let’s start with the setup. Open your terminal and create a new file directory in any convenient location on your local machine. You can name it anything but in this example, it is called mern-todo
.
- mkdir mern-todo
Now, enter into that file directory:
- cd mern-todo
The next step is to initialize the project with a package.json
file. This file will contain some information about your app and the dependencies that it needs to run.
You can use:
- npm init
And follow the instructions when prompted. Or you can use:
- npm init -y
To use the default values.
To run your JavaScript code on the backend, you need to spin up a server that will compile your code.
The server can be created in two ways: first is to use the built-in http
module in Node; second is to make use of the Express.js framework.
This tutorial will use Express.js. It is a Node.js HTTP framework that handles a lot of things out of the box and requires little code to create fully functional RESTful APIs. To use Express, install it using npm:
- npm install express
Now, create a file index.js
and type the following code into it and save:
const express = require('express');
const app = express();
const port = process.env.PORT || 5000;
app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
next();
});
app.use((req, res, next) => {
res.send('Welcome to Express');
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
This snippet from the preceding code helps handle CORS-related issues that you might face when trying to access the API from different domains during development and testing:
app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
next();
});
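You can see what this middleware does by driving it with stub req, res, and next objects. This is a dependency-free sketch for illustration only; in the real app, Express supplies these objects:

```javascript
// The same CORS middleware, exercised with minimal stand-ins for
// Express's req, res, and next.
const cors = (req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  next();
};

const headers = {};
const res = { header: (name, value) => { headers[name] = value; } };
let nextCalled = false;

cors({}, res, () => { nextCalled = true; });

console.log(headers['Access-Control-Allow-Origin']); // *
console.log(nextCalled);                             // true
```

Every request passes through the middleware, picks up the two response headers, and then continues to the next handler via next() — which is why the Welcome to Express handler still runs afterward.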
It’s time to start your server to see if it works. Open your terminal in the same directory as your index.js
file and type:
- node index.js
If everything goes well, you will see Server running on port 5000 in your terminal.
There are three things that the app needs to do:
For each task, you will need to create routes that will define multiple endpoints that the todo app will depend on. So let’s create a folder routes
and create a file api.js
with the following code in it.
- mkdir routes
Edit api.js
and paste the following code in it:
const express = require('express');
const router = express.Router();
router.get('/todos', (req, res, next) => {
// get placeholder
});
router.post('/todos', (req, res, next) => {
// post placeholder
});
router.delete('/todos/:id', (req, res, next) => {
// delete placeholder
});
module.exports = router;
This provides placeholder routes for GET, POST, and DELETE.
Now, comes the interesting part. Since the app is going to make use of MongoDB which is a NoSQL database, we need to create a model and a schema. Models are defined using the schema interface. The schema allows you to define the fields stored in each document along with their validation requirements and default values. In essence, the schema is a blueprint of how the database will be constructed. In addition, you can define static and instance helper methods to make it easier to work with your data types, and also virtual properties that you can use like any other field, but which aren’t stored in the database.
To create a schema and a model, install Mongoose which is a Node package that makes working with MongoDB easier.
- # ensure that you are in the `mern-todo` project directory
- npm install mongoose
Create a new folder in your root directory and name it models
. Inside it create a file and name it todo.js
with the following code in it:
- mkdir models
Paste the following into todo.js
with your text editor:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
// Create schema for todo
const TodoSchema = new Schema({
action: {
type: String,
required: [true, 'The todo text field is required'],
},
});
// Create model for todo
const Todo = mongoose.model('todo', TodoSchema);
module.exports = Todo;
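The required rule on the action field behaves roughly like the following dependency-free sketch. This is a simplification for illustration, not Mongoose's actual implementation:

```javascript
// Simplified sketch of the schema's `required` validation: a todo
// without a non-empty `action` string is rejected with the same message.
function validateTodo(doc) {
  if (typeof doc.action !== 'string' || doc.action.length === 0) {
    return { error: 'The todo text field is required' };
  }
  return { value: doc };
}

console.log(validateTodo({ action: 'build a mern stack application' }));
// { value: { action: 'build a mern stack application' } }
console.log(validateTodo({}));
// { error: 'The todo text field is required' }
```

In the real app, Mongoose runs this kind of check automatically when you call Todo.create(), surfacing the custom message you defined in the schema.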
Now, we need to update our routes to make use of the new model.
const express = require('express');
const router = express.Router();
const Todo = require('../models/todo');
router.get('/todos', (req, res, next) => {
// This will return all the data, exposing only the id and action field to the client
Todo.find({}, 'action')
.then((data) => res.json(data))
.catch(next);
});
router.post('/todos', (req, res, next) => {
if (req.body.action) {
Todo.create(req.body)
.then((data) => res.json(data))
.catch(next);
} else {
res.json({
error: 'The input field is empty',
});
}
});
router.delete('/todos/:id', (req, res, next) => {
Todo.findOneAndDelete({ _id: req.params.id })
.then((data) => res.json(data))
.catch(next);
});
module.exports = router;
You will need a database where you will store your data. For this, you will make use of mLab. Follow the documentation to get started with mLab.
After setting up your database you need to update index.js
file with the following code:
const express = require('express');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const routes = require('./routes/api');
require('dotenv').config();
const app = express();
const port = process.env.PORT || 5000;
// Connect to the database
mongoose
.connect(process.env.DB, { useNewUrlParser: true })
.then(() => console.log(`Database connected successfully`))
.catch((err) => console.log(err));
// Since mongoose's Promise is deprecated, we override it with Node's Promise
mongoose.Promise = global.Promise;
app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
next();
});
app.use(bodyParser.json());
app.use('/api', routes);
app.use((err, req, res, next) => {
console.log(err);
next();
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
Note: In versions prior to Express 4.16+ it was necessary to rely upon middleware like body-parser
. However, it is now possible to use the built-in parser.
If you are using an older version of Express, use npm to install body-parser
:
- npm install body-parser
The preceding code makes use of process.env
to access environment variables, which need to be created. Create a file in your root directory with the name .env
and edit it:
DB = 'mongodb://<USER>:<PASSWORD>@example.mlab.com:port/todo'
Make sure you use your own MongoDB URL from mLab after you created your database and user. Replace <USER>
with the username and <PASSWORD>
with the password of the user you created.
To work with environment variables, you will have to install a Node package called dotenv
that gives you access to the environment variables stored in the .env
file.
- # ensure that you are in the `mern-todo` project directory
- npm install dotenv
Then require and configure it in index.js
:
require('dotenv').config()
Using environment variables instead of writing credentials to the application code directly can hide sensitive information from your versioning system. It is considered a best practice to separate configuration and secret data from application code in this manner.
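Under the hood, dotenv essentially parses KEY=VALUE lines and copies them onto process.env. The following is a much-simplified sketch of that idea (the real package also handles comments, multiline values, and more); the connection string here is a placeholder, not real credentials:

```javascript
// Much-simplified sketch of what dotenv does: parse KEY=VALUE lines
// into an object, stripping optional surrounding quotes.
function parseEnv(contents) {
  const result = {};
  for (const line of contents.split('\n')) {
    const match = line.match(/^\s*(\w+)\s*=\s*(.*)$/);
    if (!match) continue; // skip blank lines and anything without KEY=VALUE shape
    result[match[1]] = match[2].trim().replace(/^['"]|['"]$/g, '');
  }
  return result;
}

const parsed = parseEnv(`DB = 'mongodb://user:password@example.mlab.com:27017/todo'`);
console.log(parsed.DB); // mongodb://user:password@example.mlab.com:27017/todo
```

The real package then assigns each parsed key onto process.env, which is why process.env.DB becomes available after require('dotenv').config() runs.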
This is the part we start trying out things to make sure your RESTful API is working. Since your frontend is not ready yet, you can make use of some API development clients to test your code.
You can use Postman or Insomnia or your preferred client for testing APIs.
Start your server using the command:
- node index.js
Now, open your client, create a GET method and navigate to http://localhost:5000/api/todos
.
Test all the API endpoints and make sure they are working. For the endpoints that require body
, send JSON with the necessary fields, since that is what you set up in your code.
Sample POST request:
POST localhost:5000/api/todos
Body
raw
Sample POST value:
{
"action": "build a mern stack application"
}
Sample GET request:
GET localhost:5000/api/todos
Sample GET response:
Output[
{
"id": "5bd4edfc89d4c3228e1bbe0a",
"action": "build a mern stack application"
}
]
Sample DELETE request:
DELETE localhost:5000/api/todos/5bd4edfc89d4c3228e1bbe0ad
Test and observe the results of GET, POST, and DELETE.
Since you are done with the functionality you want from your API, it is time to create an interface for the client to interact with the API. To start out with the frontend of the todo app, you will use the create-react-app
command to scaffold your app.
In the same root directory as your backend code, which is the mern-todo
directory, run:
- npx create-react-app client
This will create a new folder in your mern-todo
directory called client
, where you will add all the React code.
Before testing the React app, there are a few dependencies that need to be installed in the project root directory.
First, install concurrently
as a dev dependency:
- npm install concurrently --save-dev
Concurrently is used to run more than one command simultaneously from the same terminal window.
Then, install nodemon
as a dev dependency:
- npm install nodemon --save-dev
Nodemon is used to run the server and monitor it as well. If there is any change in the server code, Nodemon will restart it automatically with the new changes.
Next, open your package.json
file in the root folder of the app project, and paste the following code:
{
// ...
"scripts": {
"start": "node index.js",
"start-watch": "nodemon index.js",
"dev": "concurrently \"npm run start-watch\" \"cd client && npm start\""
},
// ...
}
Enter into the client folder, then locate the package.json
file and add the following key-value pair inside it.
{
// ...
"proxy": "http://localhost:5000"
}
This proxy setup in our package.json
file will enable you to make API calls without having to type the full URL, just /api/todos
will get all your todos.
Open your terminal and run npm run dev
and make sure you are in the mern-todo
directory and not in the client
directory.
Your app will be open and running on localhost:3000
.
One of the advantages of React is that it makes use of components, which are reusable and make code modular. For your todo app, there will be two stateful components and one stateless component.
Inside your src
folder create another folder called components
and inside it create three files Input.js
, ListTodo.js
, and Todo.js
.
Open Input.js
file and paste the following:
import React, { Component } from 'react';
import axios from 'axios';
class Input extends Component {
state = {
action: '',
};
addTodo = () => {
const task = { action: this.state.action };
if (task.action && task.action.length > 0) {
axios
.post('/api/todos', task)
.then((res) => {
if (res.data) {
this.props.getTodos();
this.setState({ action: '' });
}
})
.catch((err) => console.log(err));
} else {
console.log('input field required');
}
};
handleChange = (e) => {
this.setState({
action: e.target.value,
});
};
render() {
let { action } = this.state;
return (
<div>
<input type="text" onChange={this.handleChange} value={action} />
<button onClick={this.addTodo}>add todo</button>
</div>
);
}
}
export default Input;
To make use of axios, which is a Promise-based HTTP client for the browser and Node.js, you will need to navigate to your client
directory from your terminal:
- cd client
And run npm install axios
:
- npm install axios
After that, open your ListTodo.js
file and paste the following code:
import React from 'react';
const ListTodo = ({ todos, deleteTodo }) => {
return (
<ul>
{todos && todos.length > 0 ? (
todos.map((todo) => {
return (
<li key={todo._id} onClick={() => deleteTodo(todo._id)}>
{todo.action}
</li>
);
})
) : (
<li>No todo(s) left</li>
)}
</ul>
);
};
export default ListTodo;
Then, in your Todo.js
file, write the following code:
import React, { Component } from 'react';
import axios from 'axios';
import Input from './Input';
import ListTodo from './ListTodo';
class Todo extends Component {
state = {
todos: [],
};
componentDidMount() {
this.getTodos();
}
getTodos = () => {
axios
.get('/api/todos')
.then((res) => {
if (res.data) {
this.setState({
todos: res.data,
});
}
})
.catch((err) => console.log(err));
};
deleteTodo = (id) => {
axios
.delete(`/api/todos/${id}`)
.then((res) => {
if (res.data) {
this.getTodos();
}
})
.catch((err) => console.log(err));
};
render() {
let { todos } = this.state;
return (
<div>
<h1>My Todo(s)</h1>
<Input getTodos={this.getTodos} />
<ListTodo todos={todos} deleteTodo={this.deleteTodo} />
</div>
);
}
}
export default Todo;
You will need to make a little adjustment to your React code. Delete the logo and adjust your App.js
to look like this:
import React from 'react';
import Todo from './components/Todo';
import './App.css';
const App = () => {
return (
<div className="App">
<Todo />
</div>
);
};
export default App;
Then paste the following code into App.css
:
.App {
text-align: center;
font-size: calc(10px + 2vmin);
width: 60%;
margin-left: auto;
margin-right: auto;
}
input {
height: 40px;
width: 50%;
border: none;
border-bottom: 2px #101113 solid;
background: none;
font-size: 1.5rem;
color: #787a80;
}
input:focus {
outline: none;
}
button {
width: 25%;
height: 45px;
border: none;
margin-left: 10px;
font-size: 25px;
background: #101113;
border-radius: 5px;
color: #787a80;
cursor: pointer;
}
button:focus {
outline: none;
}
ul {
list-style: none;
text-align: left;
padding: 15px;
background: #171a1f;
border-radius: 5px;
}
li {
padding: 15px;
font-size: 1.5rem;
margin-bottom: 15px;
background: #282c34;
border-radius: 5px;
overflow-wrap: break-word;
cursor: pointer;
}
@media only screen and (min-width: 300px) {
.App {
width: 80%;
}
input {
width: 100%
}
button {
width: 100%;
margin-top: 15px;
margin-left: 0;
}
}
@media only screen and (min-width: 640px) {
.App {
width: 60%;
}
input {
width: 50%;
}
button {
width: 30%;
margin-left: 10px;
margin-top: 0;
}
}
Also in index.css
add the following rules:
body {
margin: 0;
padding: 0;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
box-sizing: border-box;
background-color: #282c34;
color: #787a80;
}
code {
font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New", monospace;
}
Assuming no errors when saving all these files, the todo app will be ready and fully functional with the functionality discussed earlier: creating a task, deleting a task, and viewing all your tasks.
In this tutorial, you created a todo app using the MERN stack. You wrote a frontend application using React that communicates with a backend application written using Express.js. You also created a MongoDB backend for storing tasks in a database.
Rate limiting manages your network’s traffic and limits the number of times someone repeats an operation in a given duration, such as using an API. A service without a layer of security against rate limit abuse is prone to overload and hampers your application’s proper operation for legitimate customers.
In this tutorial, you will build a Node.js server that will check the IP address of the request and also calculate the rate of these requests by comparing the timestamp of requests per user. If an IP address crosses the limit you have set for the application, you will call Cloudflare’s API and add the IP address to a list. You will then configure a Cloudflare Firewall Rule that will ban all requests with IP addresses in the list.
By the end of this tutorial, you will have built a Node.js project deployed on DigitalOcean’s App Platform that protects a Cloudflare routed domain with rate limiting.
Before you begin this guide, you will need:
In this step, you will expand on your basic Express server, push your code to a GitHub repository, and deploy your application to App Platform.
Open the project directory of the basic Express server with your code editor. Create a new file by the name .gitignore
in the root directory of the project. Add the following lines to the newly created .gitignore
file:
node_modules/
.env
The first line in your .gitignore
file is a directive to git not to track the node_modules
directory. This will enable you to keep your repository size small. The node_modules
can be generated when required by running the command npm install
. The second line prevents the environment variable file from being tracked. You will create the .env
file in further steps.
Navigate to your server.js
in your code editor and modify the following lines of code:
...
app.listen(process.env.PORT || 3000, () => {
console.log(`Example app is listening on port ${process.env.PORT || 3000}`);
});
The change to conditionally use PORT
as an environment variable enables the application to dynamically have the server running on the assigned PORT
or use 3000
as the fallback one.
Note: The string in console.log()
is wrapped within backticks (`) and not within quotes. This enables you to use template literals, which provide the capability to have expressions within strings.
Visit your terminal window and run your application:
- node server.js
Your browser window will display Successful response
. In your terminal, you will see the following output:
OutputExample app is listening on port 3000
With your Express server running successfully, you’ll now deploy to App Platform.
First, initialize git
in the root directory of the project and push the code to your GitHub account. Navigate to the App Platform dashboard in the browser and click on the Create App button. Choose the GitHub option and authorize with GitHub, if necessary. Select your project’s repository from the dropdown list of projects you want to deploy to App Platform. Review the configuration, then give a name to the application. For the purpose of this tutorial, select the Basic plan as you’ll work in the application’s development phase. Once ready, click Launch App.
Next, navigate to the Settings tab and click on the section Domains. Add your domain routed via Cloudflare into the field Domain or Subdomain Name. Select the bullet You manage your domain to copy the CNAME
record that you’ll use to add to your domain’s Cloudflare DNS account.
With your application deployed to App Platform, head over to your domain’s dashboard on Cloudflare in a new tab as you will return to App Platform’s dashboard later. Navigate to the DNS tab. Click on the Add Record button and select CNAME as your Type, @ as the root, and paste in the CNAME
you copied from the App Platform. Click on the Save button, then navigate to the Domains section under the Settings tab in your App Platform’s Dashboard and click on the Add Domain button.
Click the Deployments tab to see the details of the deployment. Once deployment finishes, you can open your_domain
to view it on the browser. Your browser window will display: Successful response
. Navigate to the Runtime Logs tab on the App Platform dashboard, and you will get the following output:
Output
Example app is listening on port 8080
Note: The port number 8080
is the default assigned port by the App Platform. You can override this by changing the configuration while reviewing the app before deployment.
With your application now deployed to App Platform, let’s look at how to set up a cache to track requests for the rate limiter.
In this step, you will store a user’s IP address in a cache with an array of timestamps to monitor the requests per second of each user’s IP address. A cache is temporary storage for data frequently used by an application. The data in a cache is usually kept in quick access hardware like RAM (Random-Access Memory). The fundamental goal of a cache is to improve data retrieval performance by decreasing the need to visit the slower storage layer underneath it. You will use three npm packages: node-cache
, is-ip
, and request-ip
to aid in the process.
The request-ip
package captures the user’s IP address used to request the server. The node-cache
package creates an in-memory cache which you will use to keep track of users’ requests. You’ll use the is-ip
package to check whether an IP address is an IPv6 address. Install the node-cache
, is-ip
, and request-ip
packages via npm in your terminal.
- npm i node-cache is-ip request-ip
Open the server.js
file in your code editor and add the following lines of code below const express = require('express');
:
...
const requestIP = require('request-ip');
const nodeCache = require('node-cache');
const isIp = require('is-ip');
...
The first line here grabs the requestIP
module from request-ip
package you installed. This module captures the user’s IP address used to request the server. The second line grabs the nodeCache
module from the node-cache
package. nodeCache
creates an in-memory cache, which you will use to keep track of user’s requests per second. The third line takes the isIp
module from the is-ip
package. This checks whether an IP address is IPv6, which you will then format in CIDR notation as per Cloudflare’s specification.
Define a set of constant variables in your server.js
file. You will use these constants throughout your application.
...
const TIME_FRAME_IN_S = 10;
const TIME_FRAME_IN_MS = TIME_FRAME_IN_S * 1000;
const MS_TO_S = 1 / 1000;
const RPS_LIMIT = 2;
...
TIME_FRAME_IN_S
is a constant variable that will determine the period over which your application will average the user’s timestamps. Increasing the period will increase the cache size, hence consume more memory. The TIME_FRAME_IN_MS
constant variable will also determine the period of time your application will average user’s timestamps, but in milliseconds. MS_TO_S
is the conversion factor you will use to convert time in milliseconds to seconds. The RPS_LIMIT
variable is the requests-per-second threshold that will trigger the rate limiter; change its value as per your application’s requirements. The value 2
in the RPS_LIMIT
variable is a moderate value that will trigger during the development phase.
With Express, you can write and use middleware functions, which have access to all HTTP requests coming to your server. To define a middleware function, you will call app.use()
and pass it a function. Create a function named ipMiddleware
as middleware.
...
const ipMiddleware = async function (req, res, next) {
let clientIP = requestIP.getClientIp(req);
if (isIp.v6(clientIP)) {
clientIP = clientIP.split(':').splice(0, 4).join(':') + '::/64';
}
next();
};
app.use(ipMiddleware);
...
The getClientIp()
function provided by requestIP
takes the request object, req
from the middleware, as parameter. The .v6()
function comes from the is-ip
module and returns true
if the argument passed to it is an IPv6 address. Cloudflare’s Lists requires the IPv6 address in /64
CIDR notation. You need to format the IPv6 address to follow the format: aaaa:bbbb:cccc:dddd::/64
. The .split(':')
method creates an array from the string containing the IP address splitting them by the character :
. The .splice(0,4)
method returns the first four elements of the array. The .join(':')
method returns a string from the array combined with the character :
.
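As a sketch of this formatting chain (the IPv6 address below is a hypothetical documentation address, not one captured from your server):

```javascript
// Hypothetical IPv6 address from the documentation range (2001:db8::/32)
const clientIP = '2001:0db8:85a3:0001:0000:8a2e:0370:7334';

// split(':')  -> eight hextet strings
// splice(0,4) -> the first four hextets (the /64 network prefix)
// join(':')   -> back into a string, then append '::/64'
const cidr = clientIP.split(':').splice(0, 4).join(':') + '::/64';
console.log(cidr); // 2001:0db8:85a3:0001::/64
```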
The next()
call directs the middleware to go to the next middleware function if there is one. In your example, it will take the request to the GET route /
. This is important to include at the end of your function. Otherwise, the request will not move forward from the middleware.
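To see why the final next() matters, here is a minimal, dependency-free sketch of how Express-style middleware chaining works; the runMiddleware helper and the plain request object are illustrative, not part of Express:

```javascript
// Minimal, dependency-free sketch of Express-style middleware chaining.
// runMiddleware and the plain request object are illustrative only.
function runMiddleware(middlewares, req) {
  let index = 0;
  const next = () => {
    const middleware = middlewares[index++];
    if (middleware) middleware(req, next);
  };
  next();
}

const visited = [];
runMiddleware(
  [
    // A middleware must call next() to pass the request along...
    (req, next) => { visited.push(`middleware saw ${req.ip}`); next(); },
    // ...otherwise later handlers, like this stand-in route, never run
    (req) => { visited.push('route handler'); },
  ],
  { ip: '::1' }
);
console.log(visited); // [ 'middleware saw ::1', 'route handler' ]
```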
Initialize an instance of node-cache
by adding the following variable below the constants:
...
const IPCache = new nodeCache({ stdTTL: TIME_FRAME_IN_S, deleteOnExpire: false, checkperiod: TIME_FRAME_IN_S });
...
With the constant variable IPCache
, you are overriding the default parameters native to nodeCache
with the custom properties:
stdTTL
: The interval in seconds after which a key-value pair of cache elements will be evicted from the cache. TTL
stands for Time To Live, and is a measure of time after which the cache entry expires.

deleteOnExpire
: Set to false
as you will write a custom callback function to handle the expired
event.

checkperiod
: The interval in seconds after which an automatic check for expired elements is triggered. The default value is 600
, and as your application’s element expiry is set to a lesser value, the check for expiry will also happen sooner.

For more information on the default parameters of node-cache
, you will find the node-cache npm package’s docs page useful. The following diagram will help you to visualise how a cache stores data:
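As a rough sketch of the cache’s shape (all IP addresses and timestamps below are hypothetical), each key is an IP address and each value is an array of request timestamps:

```javascript
// Plain-object illustration of what IPCache holds at some instant.
// Keys are client IPs (an IPv4 and a /64-formatted IPv6 entry shown);
// values are arrays of request timestamps in milliseconds.
const cacheSnapshot = {
  '203.0.113.5': [1700000000000, 1700000000400, 1700000000900],
  '2001:db8:0:1::/64': [1700000001000],
};

// Number of requests recorded for one IP within its time frame
const requestCount = cacheSnapshot['203.0.113.5'].length;
console.log(requestCount); // 3
```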
You will now create a new key-value pair for the new IP address and append to an existing key-value pair if an IP address exists in the cache. The value is an array of timestamps corresponding to each request made to your application. In your server.js
file, create the updateCache()
function below the IPCache
constant variable to add the timestamp of the request to cache:
...
const updateCache = (ip) => {
let IPArray = IPCache.get(ip) || [];
IPArray.push(new Date());
IPCache.set(ip, IPArray, (IPCache.getTtl(ip) - Date.now()) * MS_TO_S || TIME_FRAME_IN_S);
};
...
The first line in the function gets the array of timestamps for the given IP address, or if null, initializes with an empty array. In the following line, you are pushing the present timestamp caught by the new Date()
function into the array. The .set()
function provided by node-cache
takes three arguments: key
, value
and the TTL
. This TTL
will override the standard TTL set by replacing the value of stdTTL
from the IPCache
variable. If the IP address already exists in the cache, you will use the existing TTL; else, you will set TTL as TIME_FRAME_IN_S
.
The TTL for the current key-value pair is calculated by subtracting the present timestamp from the expiry timestamp. The difference is then converted to seconds and passed as the third argument to the .set()
function. The .getTtl()
function takes a key and IP address as an argument and returns the TTL of the key-value pair as a timestamp. If the IP address does not exist in the cache, it will return undefined
and use the fallback value of TIME_FRAME_IN_S
.
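A worked example of this TTL arithmetic, using hypothetical fixed clock values in place of real Date.now() readings:

```javascript
const MS_TO_S = 1 / 1000;
const TIME_FRAME_IN_S = 10;

// Hypothetical clock values: the cache entry expires at t = 15000 ms
// and the present moment is t = 11000 ms
const expiryTimestamp = 15000;
const now = 11000;
const ttlSeconds = (expiryTimestamp - now) * MS_TO_S; // 4 seconds remain

// If the IP is not cached yet, getTtl() returns undefined: the subtraction
// yields NaN, which is falsy, so || falls back to TIME_FRAME_IN_S
const ttlFromCache = undefined;
const fallbackTtl = (ttlFromCache - now) * MS_TO_S || TIME_FRAME_IN_S; // 10
console.log(ttlSeconds, fallbackTtl);
```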
Note: You require the conversion timestamps from milliseconds to seconds as JavaScript stores them in milliseconds while the node-cache
module uses seconds.
In the ipMiddleware
middleware, add the following lines after the if
code block if (isIp.v6(clientIP))
to calculate the requests per second of the IP address calling your application:
...
updateCache(clientIP);
const IPArray = IPCache.get(clientIP);
if (IPArray.length > 1) {
const rps = IPArray.length / ((IPArray[IPArray.length - 1] - IPArray[0]) * MS_TO_S);
if (rps > RPS_LIMIT) {
console.log('You are hitting limit', clientIP);
}
}
...
The first line adds the timestamp of the request made by the IP address to the cache by calling the updateCache()
function you declared. The second line collects the array of timestamps for the IP address. If the number of elements in the array of timestamps is greater than one (calculating requests per second needs a minimum of two timestamps), and the requests per second are more than the threshold value you defined in the constants, you will console.log
the IP address. The rps
variable calculates the requests per second by dividing the number of requests by the time interval between the first and last timestamps, converted to seconds.
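A worked example of the rps calculation, using hypothetical millisecond timestamps in place of Date objects (subtracting two Date objects also yields milliseconds):

```javascript
const MS_TO_S = 1 / 1000;

// Hypothetical millisecond timestamps for three requests within half a second
const IPArray = [1000, 1200, 1500];

// requests per second = number of requests / elapsed time in seconds
const rps = IPArray.length / ((IPArray[IPArray.length - 1] - IPArray[0]) * MS_TO_S);
console.log(rps); // 6
```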
Since you had defaulted the property deleteOnExpire
to the value false
in the IPCache
variable, you will now need to handle the expired
event manually. node-cache
provides a callback function that triggers on expired
event. Add the following lines of code below the IPCache
constant variable:
...
IPCache.on('expired', (key, value) => {
if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS) {
IPCache.del(key);
}
});
...
.on()
is a callback function that accepts key
and value
of the expired element as the arguments. In your cache, value
is an array of timestamps of requests. The if
condition checks whether the last element in the array is at least TIME_FRAME_IN_S
seconds older than the present time. If it is, no request has arrived within the time frame, so the .del()
function takes key
as an argument and deletes the expired element from the cache.
For the instances when only some elements of the array are older than TIME_FRAME_IN_S
seconds, you need to remove just those expired timestamps from the cache entry. Add the following code in the callback function after the if
code block if (new Date() - value[value.length - 1] > TIME_FRAME_IN_MS)
.
...
else {
const updatedValue = value.filter(function (element) {
return new Date() - element < TIME_FRAME_IN_MS;
});
IPCache.set(key, updatedValue, TIME_FRAME_IN_S - (new Date() - updatedValue[0]) * MS_TO_S);
}
...
The filter()
array method native to JavaScript provides a callback function to filter the elements in your array of timestamps. In your case, the callback keeps only the elements that are less than TIME_FRAME_IN_S
seconds older than the present time. The filtered elements are then assigned to the updatedValue
variable. This will update your cache with the filtered elements in the updatedValue
variable and a new TTL. The TTL that matches the first element in the updatedValue
variable will trigger the .on('expired')
callback function when the cache removes the following element. The difference of TIME_FRAME_IN_S
and the time expired since the first request’s timestamp in updatedValue
calculates the new and updated TTL.
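A worked example of this filtering, again using hypothetical millisecond timestamps in place of Date objects:

```javascript
const TIME_FRAME_IN_MS = 10000;

// Hypothetical values: "now" is t = 20000 ms and three requests were
// recorded at 5000, 12000, and 18000 ms
const now = 20000;
const value = [5000, 12000, 18000];

// Keep only the timestamps that are less than TIME_FRAME_IN_MS old;
// the 15-second-old entry at 5000 ms is dropped
const updatedValue = value.filter((element) => now - element < TIME_FRAME_IN_MS);
console.log(updatedValue); // [ 12000, 18000 ]
```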
With your middleware functions now defined, visit your terminal window and run your application:
- node server.js
Then, visit localhost:3000
in your web browser. Your browser window will display: Successful response
. Refresh the page repeatedly to hit the RPS_LIMIT
. Your terminal window will display:
Output
Example app is listening on port 3000
You are hitting limit ::1
Note: The IP address for localhost is shown as ::1
. Your application will capture the public IP of a user when deployed outside localhost.
Your application is now able to track the user’s requests and store the timestamps in the cache. In the next step, you will integrate Cloudflare’s API to set up the Firewall.
In this step, you will set up Cloudflare’s Firewall to block IP Addresses when hitting the rate limit, create environment variables, and make calls to the Cloudflare API.
Visit the Cloudflare dashboard in your browser, log in, and navigate to your account’s homepage. Open Lists under Configurations tab. Create a new List with your_list
as the name.
Note: The Lists section is available on your Cloudflare account’s dashboard page and not your Cloudflare domain’s dashboard page.
Navigate to the Home tab and open your_domain
's dashboard. Open the Firewall tab and click on Create a Firewall rule under the Firewall Rules section. Give your_rule_name
to the Firewall to identify it. In the Field, select IP Source Address
from the dropdown, is in list
for the Operator, and your_list
for the Value. Under the dropdown for Choose an action, select Block and click Deploy.
Create a .env
file in the project’s root directory with the following lines to call Cloudflare API from your application:
ACCOUNT_MAIL=your_cloudflare_login_mail
API_KEY=your_api_key
ACCOUNT_ID=your_account_id
LIST_ID=your_list_id
To get a value for API_KEY
, navigate to the API Tokens tab on the My Profile section of your Cloudflare dashboard. Click View in the Global API Key section and enter your Cloudflare password to view it. Visit the Lists section under the Configurations tab on the account’s homepage. Click on Edit beside your_list
list you created. Get the ACCOUNT_ID
and LIST_ID
from the URL of your_list
in the browser. The URL is of the format below:
https://dash.cloudflare.com/your_account_id/configurations/lists/your_list_id
Warning: Make sure the content of .env
is kept confidential and not made public. Make sure you have the .env
file listed in the .gitignore
file you created in Step 1.
Install the axios
and dotenv
package via npm on your terminal.
- npm i axios dotenv
Open the server.js
file in your code editor and add the following lines of code below the nodeCache
constant variable:
...
const axios = require('axios');
require('dotenv').config();
...
The first line here grabs the axios
module from axios
package you installed. You will use this module to make network calls to Cloudflare’s API. The second line requires and configures the dotenv
module to enable the process.env
global variable that will define the values you placed in your .env
file to server.js
.
Add the following to the if (rps > RPS_LIMIT)
condition within ipMiddleware
above console.log('You are hitting limit', clientIP)
to call Cloudflare API.
...
const url = `https://api.cloudflare.com/client/v4/accounts/${process.env.ACCOUNT_ID}/rules/lists/${process.env.LIST_ID}/items`;
const body = [{ ip: clientIP, comment: 'your_comment' }];
const headers = {
'X-Auth-Email': process.env.ACCOUNT_MAIL,
'X-Auth-Key': process.env.API_KEY,
'Content-Type': 'application/json',
};
try {
await axios.post(url, body, { headers });
} catch (error) {
console.log(error);
}
...
You are now calling the Cloudflare API through the URL to add an item, in this case an IP address, to your_list
. The Cloudflare API takes your ACCOUNT_MAIL
and API_KEY
in the header of the request with the key as X-Auth-Email
and X-Auth-Key
. The body of the request takes an array of objects with ip
as the IP address to add to the list, and a comment
with the value your_comment
to identify the entry. You can modify value of comment
with your own custom comment. The POST request made via axios.post()
is wrapped in a try-catch block to handle any errors that may occur. The axios.post
function takes the url
, body
and an object with headers
to make the request.
Change the clientIP
variable within the ipMiddleware
function when testing out the API requests with a test IP address like 198.51.100.0/24
as Cloudflare does not accept localhost’s IP address in its Lists.
...
let clientIP = '198.51.100.0/24';
...
Visit your terminal window and run your application:
- node server.js
Then, visit localhost:3000
in your web browser. Your browser window will display: Successful response
. Refresh the page repeatedly to hit the RPS_LIMIT
. Your terminal window will display:
Output
Example app is listening on port 3000
You are hitting limit ::1
When you have hit the limit, open the Cloudflare dashboard and navigate to the your_list
's page. You will see the IP address you put in the code added to your Cloudflare’s List named your_list
. The Cloudflare Firewall block page itself will display once you deploy your changes by pushing them to GitHub.
Warning: Make sure to change the value in your clientIP
variable to requestIP.getClientIp(req)
before deploying or pushing the code to GitHub.
Deploy your application by committing the changes and pushing the code to GitHub. As you have set up auto-deploy, the code from GitHub will automatically deploy to your DigitalOcean’s App Platform. As your .env
file is not added to GitHub, you will need to add it to App Platform via the Settings tab at App-Level Environment Variables section. Add the key-value pair from your project’s .env
file so your application can access its contents on the App Platform. After you save the environment variables, open your_domain
in your browser after deployment finishes and refresh the page repeatedly to hit the RPS_LIMIT
. Once you hit the limit, the browser will show Cloudflare’s Firewall page.
Navigate to the Runtime Logs tab on the App Platform dashboard, and you will see the following output:
Output
...
You are hitting limit your_public_ip
You can open your_domain
from a different device or via VPN to see that the Firewall bans only the IP address in your_list
. You can delete the IP address from your_list
through your Cloudflare dashboard.
Note: Occasionally, it takes a few seconds for the Firewall to trigger due to the cached response from the browser.
You have set up Cloudflare’s Firewall to block IP Addresses when users are hitting the rate limit by making calls to the Cloudflare API.
In this article, you built a Node.js project deployed on DigitalOcean’s App Platform connected to your domain routed via Cloudflare. You protected your domain against rate limit misuse by configuring a Firewall Rule on Cloudflare. From here, you can modify the Firewall Rule to show JS Challenge or CAPTCHA instead of banning the user. The Cloudflare documentation details the process.
Yarn is a package manager for Node.js that focuses on speed, security, and consistency. It was originally created to address some issues with the popular NPM package manager. Though the two package managers have since converged in terms of performance and features, Yarn remains popular, especially in the world of React development.
Some of the unique features of Yarn are:
In this tutorial you will install Yarn globally, add Yarn to a specific project, and learn some basic Yarn commands.
Deploy your frontend applications from GitHub using DigitalOcean App Platform. Let DigitalOcean focus on scaling your app.
Before installing and using the Yarn package manager, you will need to have Node.js installed. To see if you already have Node.js installed, type the following command into your local command line terminal:
- node -v
If you see a version number, such as v12.16.3
printed, you have Node.js installed. If you get a command not found
error (or similar phrasing), please install Node.js before continuing.
To install Node.js, follow our tutorial for Ubuntu, Debian, CentOS, or macOS.
Once you have Node.js installed, proceed to Step 1 to install the Yarn package manager.
Yarn has a unique way of installing and running itself in your JavaScript projects. First you install the yarn
command globally, then you use the global yarn
command to install a specific local version of Yarn into your project directory. This is necessary to ensure that everybody working on a project (and all of the project’s automated testing and deployment tooling) is running the exact same version of yarn
, to avoid inconsistent behaviors and results.
The Yarn maintainers recommend installing Yarn globally by using the NPM package manager, which is included by default with all Node.js installations. Use the -g
flag with npm install
to do this:
- sudo npm install -g yarn
After the package installs, have the yarn
command print its own version number. This will let you verify it was installed properly:
- yarn --version
Output
1.22.11
Now that you have the yarn
command installed globally, you can use it to install Yarn into a specific JavaScript project.
If you are using Yarn to work with an existing Yarn-based project, you can skip this step. The project should already be set up with a local version of Yarn and all the configuration files necessary to use it.
If you are setting up a new project of your own, you’ll want to configure a project-specific version of Yarn now.
First, navigate to your project directory:
- cd ~/my-project
If you don’t have a project directory, you can make a new one with mkdir
and then move into it:
- mkdir my-project
- cd my-project
Now use the yarn set
command to set the version to berry
:
- yarn set version berry
This will download the current, actively developed version of Yarn – berry
– save it to a .yarn/releases/
directory in your project, and set up a .yarnrc.yml
configuration file as well:
Output
Resolving berry to a url...
Downloading https://github.com/yarnpkg/berry/raw/master/packages/berry-cli/bin/berry.js...
Saving it into /home/sammy/my-project/.yarn/releases/yarn-berry.cjs...
Updating /home/sammy/my-project/.yarnrc.yml...
Done!
Now try the yarn --version
command again:
- yarn --version
Output
3.0.0
You’ll see the version is 3.0.0
or higher. This is the latest release of Yarn.
Note: If you cd
out of your project directory and run yarn --version
again, you’ll once again get the global Yarn’s version number, 1.22.11
in this case. Every time you run yarn
, you are using the globally installed version of the command. The global yarn
command first checks to see if it’s in a Yarn project directory with a .yarnrc.yml
file, and if it is, it hands the command off to the project-specific version of Yarn configured in the project’s yarnPath
setting.
Your project is now set up with a project-specific version of Yarn. Next we’ll look at a few commonly used yarn
commands to get started with.
Yarn has many subcommands, but you only need a few to get started. Let’s look at the first subcommands you’ll want to use.
When starting out with any new tool, it’s useful to learn how to access its online help. In Yarn the --help
flag can be added to any command to get more information:
- yarn --help
This will print out overall help for the yarn
command. To get more specific information about a subcommand, add --help
after the subcommand:
- yarn install --help
This would print out details on how to use the yarn install
command.
If you’re starting a project from scratch, use the init
subcommand to create the Yarn-specific files you’ll need:
- yarn init
This will add a package.json
configuration file and a yarn.lock
file to your directory. The package.json
contains configuration and your list of module dependencies. The yarn.lock
file locks those dependencies to specific versions, making sure that the dependency tree is always consistent.
To download and install all the dependencies in an existing Yarn-based project, use the install
subcommand:
- yarn install
This will download and install the modules you need to get started.
Use the add
subcommand to add new dependencies to a project:
- yarn add package-name
This will download the module, install it, and update your package.json
and yarn.lock
files.
.gitignore
File for YarnYarn stores files in a .yarn
folder inside your project directory. Some of these files should be checked into version control and others should be ignored. The basic .gitignore
configuration for Yarn follows:
.yarn/*
!.yarn/patches
!.yarn/releases
!.yarn/plugins
!.yarn/sdks
!.yarn/versions
.pnp.*
This ignores the entire .yarn
directory, and then adds in some exceptions for important folders, including the releases
directory which contains your project-specific version of Yarn.
For more details on how to configure Git and Yarn, please refer to the official Yarn documentation on .gitignore
.
In this tutorial you installed Yarn and learned about a few yarn
subcommands. For more information on using Yarn, take a look at the official Yarn CLI documentation.
For more general Node.js and JavaScript help, please visit our Node.js and JavaScript tag pages, where you’ll find relevant tutorials, tech talks, and community Q&A.
Node.js and MongoDB are a perfect pair when developing applications. The ability to switch between JavaScript objects and JSON makes development seamless. Harness the power of this combination by building a Node app from scratch using Express, and connecting to the database using Mongoose.
Express: a fast, unopinionated, minimalist web framework for Node.js
Mongoose: an elegant MongoDB object modeling library for Node.js
MongoDB Node.js driver: the official MongoDB Node.js driver that allows Node.js applications to connect to MongoDB and work with data
By leveraging JavaScript on both the frontend and the backend, development can be more consistent and full-stack web applications can be designed within the same development environment.
For more information on Node.js please refer to the following resources:
node-js
tag page with links to Node.js tutorials, tech talks, Q&A, and more

GraphQL is a specification and is therefore language agnostic. When it comes to GraphQL development with Node.js, there are various options available, including graphql-js
, express-graphql
, and apollo-server
. In this tutorial, you will set up a fully featured GraphQL server in Node.js with Apollo Server.
Since the launch of Apollo Server 2, creating a GraphQL server with Apollo Server has become more efficient, not to mention the other features that came with it.
For the purpose of this demonstration, you will build a GraphQL server for a recipe app.
To complete this tutorial, you’ll need:
This tutorial was verified with Node v14.4.0, npm
v6.14.5, apollo-server
v2.15.0, graphql
v15.1.0, sequelize
v5.21.13, and sqlite3
v4.2.0.
GraphQL is a declarative data fetching specification and query language for APIs. It was created by Facebook. GraphQL is an effective alternative to REST, as it was created to overcome some of the shortcomings of REST, like under- and over-fetching.
Unlike REST, GraphQL uses one endpoint. This means we make one request to the endpoint and we’ll get one response as JSON. This JSON response can contain as little or as much data as we want. Like REST, GraphQL can be operated over HTTP, though GraphQL is protocol agnostic.
A typical GraphQL server is composed of a schema and resolvers. A schema (or GraphQL schema) contains type definitions that make up a GraphQL API. A type definition contains field(s), each with what it is expected to return. Each field is mapped to a function on the GraphQL server called a resolver. Resolvers contain the implementation logic and return data for a field. In other words, schemas contain type definitions, while resolvers contain the actual implementations.
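As a minimal, dependency-free sketch of this split (the hello field and its resolver are illustrative only, not part of the recipe app; Apollo Server would normally parse the type definitions with its gql tag):

```javascript
// The schema declares a field and what it returns...
const typeDefs = `
  type Query {
    hello: String
  }
`;

// ...and the resolver implements it
const resolvers = {
  Query: {
    // Resolver for Query.hello: returns the data for that field
    hello: () => 'Hello, GraphQL!',
  },
};

console.log(resolvers.Query.hello()); // Hello, GraphQL!
```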
We’ll start by setting up our database. We’ll be using SQLite for our database. Also, we’ll be using Sequelize, which is an ORM for Node.js, to interact with our database.
First, let’s create a new project:
- mkdir graphql-recipe-server
Navigate to the new project directory:
- cd graphql-recipe-server
Initialize a new project:
- npm init -y
Next, let’s install Sequelize:
- npm install sequelize sequelize-cli sqlite3
In addition to installing Sequelize, we are also installing the sqlite3
package for Node.js. To help us scaffold our project, we’ll be using the Sequelize CLI, which we are installing as well.
Let’s scaffold our project with the CLI:
- node_modules/.bin/sequelize init
This will create the following folders:
config
: contains a config file, which tells Sequelize how to connect with our database.

models
: contains all models for our project, and also contains an index.js
file which integrates all the models together.

migrations
: contains all migration files.

seeders
: contains all seed files.

For the purpose of this tutorial, we won’t be using any seeders. Open config/config.json
and replace it with the following content:
{
"development": {
"dialect": "sqlite",
"storage": "./database.sqlite"
}
}
We set the dialect
to sqlite
and set the storage
to point to a SQLite database file.
Next, we need to create the database file directly inside the project’s root directory:
- touch database.sqlite
The dependencies your project needs to use SQLite are now installed.
With the database setup out of the way, we can start creating the models for our project. Our recipe app will have two models: User
and Recipe
. We’ll be using the Sequelize CLI for this:
- node_modules/.bin/sequelize model:create --name User --attributes name:string,email:string,password:string
This will create a user.js
file inside the models
directory and a corresponding migration file inside the migrations
directory.
Since we don’t want any fields on the User
model to be nullable, we need to explicitly define that. Open migrations/XXXXXXXXXXXXXX-create-user.js
and update the fields definitions as follows:
name: {
allowNull: false,
type: Sequelize.STRING
},
email: {
allowNull: false,
type: Sequelize.STRING
},
password: {
allowNull: false,
type: Sequelize.STRING
}
Then we’ll do the same in the User
model:
name: {
allowNull: false,
type: DataTypes.STRING
},
email: {
allowNull: false,
type: DataTypes.STRING
},
password: {
allowNull: false,
type: DataTypes.STRING
}
Next, let’s create the Recipe
model:
- node_modules/.bin/sequelize model:create --name Recipe --attributes title:string,ingredients:string,direction:string
Just as we did with the User
model, we’ll do the same for the Recipe
model. Open migrations/XXXXXXXXXXXXXX-create-recipe.js
and update the fields definitions as follows:
userId: {
allowNull: false,
type: Sequelize.INTEGER.UNSIGNED
},
title: {
allowNull: false,
type: Sequelize.STRING
},
ingredients: {
allowNull: false,
type: Sequelize.STRING
},
direction: {
allowNull: false,
type: Sequelize.STRING
},
You’ll notice we have an additional field: userId
, which would hold the ID of the user that created a recipe. More on this shortly.
Update the Recipe
model as well:
title: {
allowNull: false,
type: DataTypes.STRING
},
ingredients: {
allowNull: false,
type: DataTypes.STRING
},
direction: {
allowNull: false,
type: DataTypes.STRING
}
Let’s define the one-to-many relationship between the user and recipe models.
Open models/user.js
and update the User.associate
function as below:
User.associate = function(models) {
// associations can be defined here
User.hasMany(models.Recipe)
};
We need to also define the inverse of the relationship on the Recipe
model:
Recipe.associate = function(models) {
// associations can be defined here
Recipe.belongsTo(models.User, { foreignKey: 'userId' });
};
By default, Sequelize will use a camelcase name from the corresponding model name and its primary key as the foreign key. So in our case, it will expect the foreign key to be UserId
. Since we named the column differently, we need to explicitly define the foreignKey
on the association.
Now, we can run the migrations:
- node_modules/.bin/sequelize db:migrate
Now the setup for your models and migrations is complete.
As mentioned earlier, we’ll be using Apollo Server for building our GraphQL server. So, let’s install it:
- npm install apollo-server graphql bcryptjs
Apollo Server requires graphql
as a dependency, hence the need to install it as well. Also, we install bcryptjs
, which we’ll use to hash user passwords later on.
With those installed, create a src
directory, then within it, create an index.js
file and add the following code to it:
const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');
const resolvers = require('./resolvers');
const models = require('../models');
const server = new ApolloServer({
typeDefs,
resolvers,
context: { models },
});
server
.listen()
.then(({ url }) => console.log(`Server is running on ${url}`));
Here, we create a new instance of Apollo Server, passing to it our schema and resolvers (both of which we’ll create shortly). We also pass the models as the context to the Apollo Server. This will allow us to have access to the models from our resolvers.
Finally, we start the server.
A GraphQL schema defines the functionality a GraphQL API exposes. A schema is composed of types. A type can define the structure of a domain-specific entity. In addition to defining types for our domain-specific entities, we can also define types for GraphQL operations, which in turn translate to the functionality the API will have. These operations are queries, mutations, and subscriptions. Queries are used to perform read operations (fetching data) on a GraphQL server. Mutations, on the other hand, are used to perform write operations (inserting, updating, or deleting data). Subscriptions are different from both, as they are used to add real-time functionality to a GraphQL server.
We’ll be focusing only on queries and mutations in this tutorial.
Now that we understand what a GraphQL schema is, let’s create the schema for our app. Within the src
directory, create a schema.js
file and add the following code into it:
const { gql } = require('apollo-server');
const typeDefs = gql`
type User {
id: Int!
name: String!
email: String!
recipes: [Recipe!]!
}
type Recipe {
id: Int!
title: String!
ingredients: String!
direction: String!
user: User!
}
type Query {
user(id: Int!): User
allRecipes: [Recipe!]!
recipe(id: Int!): Recipe
}
type Mutation {
createUser(name: String!, email: String!, password: String!): User!
createRecipe(
userId: Int!
title: String!
ingredients: String!
direction: String!
): Recipe!
}
`;
module.exports = typeDefs;
First, we require the gql package from apollo-server. Then we use it to define our schema. Ideally, we’d want our GraphQL schema to mirror our database schema as much as possible, so we define two types, User and Recipe, which correspond to our models. On the User type, in addition to the fields we have on the User model, we also define a recipes field, which will be used to retrieve the user’s recipes. Similarly, on the Recipe type we define a user field, which will be used to get the user of a recipe.
Next, we define three queries: one for fetching a single user, one for fetching all recipes that have been created, and one for fetching a single recipe. The user and recipe queries each return the matching user or recipe, or null if no match was found for the given ID. The allRecipes query always returns an array of recipes, which might be empty if no recipe has been created yet.
Note: The !
denotes a field is required, while []
denotes the field will return an array of items.
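To illustrate these modifiers side by side (the fields below are examples only, not part of our schema):

```graphql
type Example {
  maybeRecipe: Recipe   # may resolve to a Recipe or to null
  recipe: Recipe!       # always resolves to a Recipe, never null
  recipes: [Recipe!]!   # always an array (possibly empty); items are never null
}
```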
Lastly, we define mutations for creating a new user as well as creating a new recipe. Each mutation returns the newly created user or recipe respectively.
Resolvers define how the fields in a schema are executed. In other words, our schema is useless without resolvers. Create a resolvers.js
file inside the src
directory and add the following code in it:
const resolvers = {
Query: {
async user(root, { id }, { models }) {
return models.User.findById(id);
},
async allRecipes(root, args, { models }) {
return models.Recipe.findAll();
},
async recipe(root, { id }, { models }) {
return models.Recipe.findById(id);
},
},
};
module.exports = resolvers;
Note: Modern versions of sequelize
have deprecated findById
and replaced it with findByPk
. If you encounter errors like models.Recipe.findById is not a function
or models.User.findById is not a function
, you may need to update this snippet.
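If you do hit that error, only the method name needs to change. Here is a minimal sketch of the renamed resolver, using a fake in-memory model so it runs without a database:

```javascript
// Fake stand-in for a Sequelize model; in v5+ findByPk replaces findById.
const models = {
  User: { findByPk: async (id) => ({ id, name: "John Doe" }) },
};

// Same resolver shape as in src/resolvers.js, with findByPk instead.
async function user(root, { id }, { models }) {
  return models.User.findByPk(id);
}

user(null, { id: 1 }, { models }).then((u) => console.log(u.id)); // 1
```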
We start by creating the resolvers for our queries. Here, we are making use of the models to perform the necessary queries on the database and return the results.
Still inside src/resolvers.js
, let’s import bcryptjs
at the top of the file:
const bcrypt = require('bcryptjs');
Then add the following code immediately after the Query
object:
Mutation: {
async createUser(root, { name, email, password }, { models }) {
return models.User.create({
name,
email,
password: await bcrypt.hash(password, 10),
});
},
async createRecipe(
root,
{ userId, title, ingredients, direction },
{ models }
) {
return models.Recipe.create({ userId, title, ingredients, direction });
},
},
The createUser
mutation accepts the name, email, and password of a user, and creates a new record in the database with the supplied details. We make sure to hash the password using the bcrypt
package before persisting it to the database. It returns the newly created user. The createRecipe
mutation accepts the ID of the user that’s creating the recipe as well as the details for the recipe itself, persists them to the database, and returns the newly created recipe.
To wrap up with the resolvers, let’s define how we want our custom fields (recipes
on the User
and user
on Recipe
) to be resolved. Add the following code inside src/resolvers.js
just immediately after the Mutation
object:
User: {
async recipes(user) {
return user.getRecipes();
},
},
Recipe: {
async user(recipe) {
return recipe.getUser();
},
},
These use the methods, getRecipes()
and getUser()
, which are made available on our models by Sequelize due to the relationships we defined.
It’s time to test our GraphQL server out. First, we need to start the server with:
- node src/index.js
This will be running on localhost:4000, and we will see GraphQL Playground when we access it in the browser.
Let’s try creating a new user:
# create a new user
mutation{
createUser(
name: "John Doe",
email: "johndoe@example.com",
password: "password"
)
{
id,
name,
email
}
}
This will produce the following result:
Output{
"data": {
"createUser": {
"id": 1,
"name": "John Doe",
"email": "johndoe@example.com"
}
}
}
Let’s try creating a new recipe and associate it with the user that was created:
# create a new recipe
mutation {
createRecipe(
userId: 1
title: "Salty and Peppery"
ingredients: "Salt, Pepper"
direction: "Add salt, Add pepper"
) {
id
title
ingredients
direction
user {
id
name
email
}
}
}
This will produce the following result:
Output{
"data": {
"createRecipe": {
"id": 1,
"title": "Salty and Peppery",
"ingredients": "Salt, Pepper",
"direction": "Add salt, Add pepper",
"user": {
"id": 1,
"name": "John Doe",
"email": "johndoe@example.com"
}
}
}
}
Other queries you can perform here include: user(id: 1)
, recipe(id: 1)
, and allRecipes
.
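For example, a query that combines the user query with the recipes relationship field defined in our schema could look like this:

```graphql
# fetch a user along with their recipes
query {
  user(id: 1) {
    id
    name
    recipes {
      id
      title
    }
  }
}
```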
In this tutorial, we looked at how to create a GraphQL server in Node.js with Apollo Server. We also saw how to integrate a database with a GraphQL server using Sequelize.
In the previous tutorial, you built the backend server for an invoicing application. In this tutorial, you will build the part of the application that users will interact with, known as the user interface.
Note: This is Part 2 of a 3-part series. The first tutorial is How To Build a Lightweight Invoicing App with Node: Database and API. The third tutorial is How To Build a Lightweight Invoicing App with Vue and Node: JWT Authentication and Sending Invoices.
The user interface in this tutorial will be built with Vue and allow users to log in to view and create invoices.
To complete this tutorial, you will need:
This tutorial was verified with Node v16.1.0, npm
v7.12.1, Vue v2.6.11, Vue Router v3.2.0, axios
v0.21.1, and Bootstrap v5.0.1.
You can use @vue/cli
to create a new Vue.js project.
Note: You should be able to place this new project directory next to invoicing-app
directory you created in the previous tutorial. This introduces a common practice of separating server
and client
.
In your terminal window, use the following command:
- npx @vue/cli create --inlinePreset='{ "useConfigFiles": false, "plugins": { "@vue/cli-plugin-babel": {}, "@vue/cli-plugin-eslint": { "config": "base", "lintOn": ["save"] } }, "router": true, "routerHistoryMode": true }' invoicing-app-frontend
This will use the inline preset configuration for creating a Vue.js Project with Vue Router.
Navigate to the newly created project directory:
- cd invoicing-app-frontend
Start the project to verify that there are no errors.
- npm run serve
If you visit the local app (typically at localhost:8080
) in your web browser, you will see a "Welcome to Your Vue.js App"
message.
This creates a sample Vue
project that we’ll build upon in this article.
For the frontend of this invoicing application, a lot of requests are going to be made to the backend server.
To achieve this, we’ll make use of axios. To install axios
, run the command in your project directory:
- npm install axios@0.21.1
To allow some default styling in the application, you will make use of Bootstrap
.
First, open the public/index.html
file in your code editor.
Add the CDN-hosted CSS file for Bootstrap to the head
of the document:
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-+0n0xVW2eSR5OomGNYDnhzAbDsOXxcvSN1TPprVMTNDbiYZCxYbOOl7+AMvyTG2x" crossorigin="anonymous">
Add the CDN-hosted JavaScript files for Popper and Bootstrap to the head
of the document:
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.9.2/dist/umd/popper.min.js" integrity="sha384-IQsoLXl5PILFhosVNubq5LC7Qb9DXgDA9i+tQ8Zj3iwWAwPtgFTxbJ8NT4GN1R8p" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.1/dist/js/bootstrap.min.js" integrity="sha384-Atwg2Pkwv9vp0ygtn1JAojH0nYbwNJLPhwyoVbhoPwBhjQPR5VtM2+xf0Uwh9KtT" crossorigin="anonymous"></script>
You can replace the contents of App.vue
with the following lines of code:
<template>
<div id="app">
<router-view/>
</div>
</template>
And you can ignore or delete the src/views/Home.vue
, src/views/About.vue
, and src/components/HelloWorld.vue
files that were automatically generated.
At this point, you have a new Vue project with Axios and Bootstrap.
For this application, you are going to have two major routes:
- / to render the login page
- /dashboard to render the user dashboard

To configure these routes, open the src/router/index.js file and update it with the following lines of code:
import Vue from 'vue'
import VueRouter from 'vue-router'
import SignUp from '@/components/SignUp'
import Dashboard from '@/components/Dashboard'
Vue.use(VueRouter)
const routes = [
{
path: '/',
name: 'SignUp',
component: SignUp
},
{
path: '/dashboard',
name: 'Dashboard',
component: Dashboard
}
]
const router = new VueRouter({
mode: 'history',
base: process.env.BASE_URL,
routes
})
export default router
This specifies the components that should be displayed to the user when they visit your application.
Components allow the frontend of your application to be more modular and reusable. This application will have the following components:
The Header
component displays the name of the application and Navigation
if a user is signed in.
Create a Header.vue
file in the src/components
directory. The component file has the following lines of code:
<template>
<nav class="navbar navbar-light bg-light">
<div class="navbar-brand m-0 p-3 h1 align-self-start">{{title}}</div>
<template v-if="user != null">
<Navigation v-bind:name="user.name" v-bind:company="user.company_name"/>
</template>
</nav>
</template>
<script>
import Navigation from './Navigation'
export default {
name: "Header",
props : ["user"],
components: {
Navigation
},
data() {
return {
title: "Invoicing App",
};
}
};
</script>
The Header component has a single prop
called user
. This prop
will be passed by any component that will use the header component. In the template for the header, the Navigation
component is imported and conditional rendering is used to determine if the Navigation
should be displayed or not.
The Navigation
component is the sidebar that will house the links of different actions.
Create a new Navigation.vue
component in the /src/components
directory. The component has the following template:
<template>
<div class="flex-grow-1">
<div class="navbar navbar-expand-lg">
<ul class="navbar-nav flex-grow-1 flex-row">
<li class="nav-item">
<a class="nav-link" v-on:click="setActive('create')">Create Invoice</a>
</li>
<li class="nav-item">
<a class="nav-link" v-on:click="setActive('view')">View Invoices</a>
</li>
</ul>
</div>
<div class="navbar-text"><em>Company: {{ company }}</em></div>
<div class="navbar-text h3">Welcome, {{ name }}</div>
</div>
</template>
...
Next, open the Navigation.vue
file in your code editor and add the following lines of code:
...
<script>
export default {
name: "Navigation",
props: ["name", "company"],
methods: {
setActive(option) {
this.$parent.$parent.isactive = option;
},
}
};
</script>
The component is created with two props: the name of the user and the name of the company. When a user clicks a navigation link, the setActive method updates the isactive property on the Dashboard component, reached via this.$parent.$parent since Navigation is rendered inside Header, which in turn is rendered inside Dashboard.
The SignUp
component houses the sign up and sign in form. Create a new file in /src/components
directory.
First, create the component:
<template>
<div class="container">
<Header/>
<ul class="nav nav-tabs" role="tablist">
<li class="nav-item" role="presentation">
<button class="nav-link active" id="login-tab" data-bs-toggle="tab" data-bs-target="#login" type="button" role="tab" aria-controls="login" aria-selected="true">Login</button>
</li>
<li class="nav-item" role="presentation">
<button class="nav-link" id="register-tab" data-bs-toggle="tab" data-bs-target="#register" type="button" role="tab" aria-controls="register" aria-selected="false">Register</button>
</li>
</ul>
<div class="tab-content p-3">
...
</div>
</div>
</template>
<script>
import axios from "axios"
import Header from "./Header"
export default {
name: "SignUp",
components: {
Header
},
data() {
return {
model: {
name: "",
email: "",
password: "",
c_password: "",
company_name: ""
},
loading: "",
status: ""
};
},
methods: {
...
}
}
</script>
The Header
component is imported and the data properties of the components are also specified.
Next, create the methods to handle what happens when data is submitted:
...
methods: {
validate() {
// checks to ensure passwords match
if (this.model.password != this.model.c_password) {
return false;
}
return true;
},
...
}
...
The validate()
method performs checks to make sure the data sent by the user meets our requirements.
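The check itself is framework-independent. Extracted as a plain function (passwordsMatch is a name used only for this sketch), it behaves like this:

```javascript
// Standalone sketch of the password-match rule used by validate().
function passwordsMatch(model) {
  return model.password === model.c_password;
}

console.log(passwordsMatch({ password: "secret", c_password: "secret" })); // true
console.log(passwordsMatch({ password: "secret", c_password: "typo" }));   // false
```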
...
methods: {
...
register() {
const formData = new FormData();
let valid = this.validate();
if (valid) {
formData.append("name", this.model.name);
formData.append("email", this.model.email);
formData.append("company_name", this.model.company_name);
formData.append("password", this.model.password);
this.loading = "Registering you, please wait";
// Post to server
axios.post("http://localhost:3128/register", formData).then(res => {
// Post a status message
this.loading = "";
if (res.data.status == true) {
// now send the user to the next route
this.$router.push({
name: "Dashboard",
params: { user: res.data.user }
});
} else {
this.status = res.data.message;
}
});
} else {
alert("Passwords do not match");
}
},
...
}
...
The register
method of the component handles the action when a user tries to register a new account. First, the data is validated using the validate
method. Then if all criteria are met, the data is prepared for submission using the formData
.
We’ve also defined the loading
property of the component to let the user know when their form is being processed. Finally, a POST
request is sent to the backend server using axios
. When a response is received from the server with a status of true
, the user is directed to the dashboard. Otherwise, an error message is displayed to the user.
...
methods: {
...
login() {
const formData = new FormData();
formData.append("email", this.model.email);
formData.append("password", this.model.password);
this.loading = "Logging In";
// Post to server
axios.post("http://localhost:3128/login", formData).then(res => {
// Post a status message
this.loading = "";
if (res.data.status == true) {
// now send the user to the next route
this.$router.push({
name: "Dashboard",
params: { user: res.data.user }
});
} else {
this.status = res.data.message;
}
});
}
}
...
The login
method is similar to the register
method. The data is prepared and sent over to the backend server to authenticate the user. If the user exists and the details match, the user is directed to their dashboard.
Now, take a look at the template for registration:
<template>
<div class="container">
...
<div class="tab-content p-3">
<div id="login" class="tab-pane fade show active" role="tabpanel" aria-labelledby="login-tab">
<div class="row">
<div class="col-md-12">
<form @submit.prevent="login">
<div class="form-group mb-3">
<label for="login-email" class="label-form">Email:</label>
<input id="login-email" type="email" required class="form-control" placeholder="example@example.com" v-model="model.email">
</div>
<div class="form-group mb-3">
<label for="login-password" class="label-form">Password:</label>
<input id="login-password" type="password" required class="form-control" placeholder="Password" v-model="model.password">
</div>
<div class="form-group">
<button class="btn btn-primary">Log In</button>
{{ loading }}
{{ status }}
</div>
</form>
</div>
</div>
</div>
...
</div>
</div>
</template>
The login form is shown above, and the input fields are linked to the respective data properties specified when the component was created. When the submit button of the form is clicked, the login method of the component is called.
Usually, when the submit button of a form is clicked, the form is submitted via a GET
or POST
request. Instead of using that, we added <form @submit.prevent="login">
when creating the form to override the default behavior and specify that the login function should be called.
The registration form also looks like this:
<template>
<div class="container">
...
<div class="tab-content p-3">
...
<div id="register" class="tab-pane fade" role="tabpanel" aria-labelledby="register-tab">
<div class="row">
<div class="col-md-12">
<form @submit.prevent="register">
<div class="form-group mb-3">
<label for="register-name" class="label-form">Name:</label>
<input id="register-name" type="text" required class="form-control" placeholder="Full Name" v-model="model.name">
</div>
<div class="form-group mb-3">
<label for="register-email" class="label-form">Email:</label>
<input id="register-email" type="email" required class="form-control" placeholder="example@example.com" v-model="model.email">
</div>
<div class="form-group mb-3">
<label for="register-company" class="label-form">Company Name:</label>
<input id="register-company" type="text" required class="form-control" placeholder="Company Name" v-model="model.company_name">
</div>
<div class="form-group mb-3">
<label for="register-password" class="label-form">Password:</label>
<input id="register-password" type="password" required class="form-control" placeholder="Password" v-model="model.password">
</div>
<div class="form-group mb-3">
<label for="register-confirm" class="label-form">Confirm Password:</label>
<input id="register-confirm" type="password" required class="form-control" placeholder="Confirm Password" v-model="model.c_password">
</div>
<div class="form-group mb-3">
<button class="btn btn-primary">Register</button>
{{ loading }}
{{ status }}
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</template>
The @submit.prevent
is also used here to call the register
method when the submit button is clicked.
Now, run your development server using this command:
- npm run serve
Visit localhost:8080
in your browser to observe the newly created login and registration page.
Note: When experimenting with the user interface, you will need to have the invoicing-app
server running. Furthermore, you may encounter a CORS (cross-origin resource sharing) error that you may need to address by setting Access-Control-Allow-Origin
headers.
Experiment with logging in and registering new users.
The Dashboard component will be displayed when the user gets routed to the /dashboard
route. It displays the Header
and the CreateInvoice
component by default.
Create the Dashboard.vue
file in the src/components
directory. The component has the following lines of code:
<template>
<div class="container">
<Header v-bind:user="user"/>
<template v-if="this.isactive == 'create'">
<CreateInvoice />
</template>
<template v-else>
<ViewInvoices />
</template>
</div>
</template>
...
Below the template, add the following lines of code:
...
<script>
import Header from "./Header";
import CreateInvoice from "./CreateInvoice";
import ViewInvoices from "./ViewInvoices";
export default {
name: "Dashboard",
components: {
Header,
CreateInvoice,
ViewInvoices,
},
data() {
return {
isactive: 'create',
title: "Invoicing App",
user : (this.$route.params.user) ? this.$route.params.user : null
};
}
};
</script>
The CreateInvoice
component contains the form needed to create a new invoice. Create a new file in the src/components
directory:
Edit the CreateInvoice
component to look like this:
<template>
<div class="container">
<div class="tab-pane p-3 fade show active">
<div class="row">
<div class="col-md-12">
<h3>Enter details below to create invoice</h3>
<form @submit.prevent="onSubmit">
<div class="form-group mb-3">
<label for="create-invoice-name" class="form-label">Invoice Name:</label>
<input id="create-invoice-name" type="text" required class="form-control" placeholder="Invoice Name" v-model="invoice.name">
</div>
<div class="form-group mb-3">
Invoice Price: <span>${{ invoice.total_price }}</span>
</div>
...
</form>
</div>
</div>
</div>
</div>
</template>
This creates a form that accepts the name of the invoice and displays the total price of the invoice. The total price is obtained by summing up the prices of individual transactions for the invoice.
Let’s take a look at how transactions are added to the invoice:
...
<form @submit.prevent="onSubmit">
...
<hr />
<h3>Transactions </h3>
<div class="form-group">
<button type="button" class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#transactionModal">Add Transaction</button>
<!-- Modal -->
<div class="modal fade" id="transactionModal" tabindex="-1" aria-labelledby="transactionModalLabel" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="exampleModalLabel">Add Transaction</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="form-group mb-3">
<label for="txn_name_modal" class="form-label">Transaction name:</label>
<input id="txn_name_modal" type="text" class="form-control">
</div>
<div class="form-group mb-3">
<label for="txn_price_modal" class="form-label">Price ($):</label>
<input id="txn_price_modal" type="number" class="form-control">
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Discard Transaction</button>
<button type="button" class="btn btn-primary" data-bs-dismiss="modal" v-on:click="saveTransaction()">Save Transaction</button>
</div>
</div>
</div>
</div>
</div>
...
</form>
...
A button is displayed for the user to add a new transaction. When the Add Transaction button is clicked, a modal is shown to the user to enter the details of the transaction. When the Save Transaction button is clicked, a method adds it to the existing transactions.
...
<form @submit.prevent="onSubmit">
...
<div class="col-md-12">
<table class="table">
<thead>
<tr>
<th scope="col">#</th>
<th scope="col">Transaction Name</th>
<th scope="col">Price ($)</th>
<th scope="col"></th>
</tr>
</thead>
<tbody>
<template v-for="txn in transactions">
<tr :key="txn.id">
<th>{{ txn.id }}</th>
<td>{{ txn.name }}</td>
<td>{{ txn.price }} </td>
<td><button type="button" class="btn btn-danger" v-on:click="deleteTransaction(txn.id)">Delete</button></td>
</tr>
</template>
</tbody>
</table>
</div>
<div class="form-group">
<button class="btn btn-primary">Create Invoice</button>
{{ loading }}
{{ status }}
</div>
</form>
...
The existing transactions are displayed in a tabular format. When the Delete button is clicked, the transaction in question is deleted from the transaction list and the Invoice Price
is recalculated. Finally, the Create Invoice
button triggers a function that then prepares the data and sends it to the backend server for the creation of the invoice.
Let’s also take a look at the component structure of the Create Invoice
component:
...
<script>
import axios from "axios";
export default {
name: "CreateInvoice",
data() {
return {
invoice: {
name: "",
total_price: 0
},
transactions: [],
nextTxnId: 1,
loading: "",
status: ""
};
},
methods: {
...
}
};
</script>
First, you define the data properties for the component. The component has an invoice object containing the invoice name and total_price, an array of transactions, and a nextTxnId counter used to assign an ID to each new transaction. The loading and status properties are used to send status updates to the user.
...
methods: {
saveTransaction() {
// append data to the arrays
let name = document.getElementById("txn_name_modal").value;
let price = document.getElementById("txn_price_modal").value;
if (name.length != 0 && price > 0) {
this.transactions.push({
id: this.nextTxnId,
name: name,
price: price
});
this.nextTxnId++;
this.calcTotal();
// clear their values
document.getElementById("txn_name_modal").value = "";
document.getElementById("txn_price_modal").value = "";
}
},
...
}
...
The methods for the CreateInvoice
component are also defined here. The saveTransaction()
method takes the values in the transaction form modal and then adds them to the transaction list. The deleteTransaction()
method deletes an existing transaction object from the list of transactions while the calcTotal()
method recalculates the total invoice price when a new transaction is added or deleted.
...
methods: {
...
deleteTransaction(id) {
// keep every transaction except the one being deleted; nextTxnId is
// left untouched so transaction IDs are never reused after a deletion
this.transactions = this.transactions.filter(function(el) {
return el.id !== id;
});
this.calcTotal();
},
calcTotal() {
let total = 0;
this.transactions.forEach(element => {
// parseFloat, so decimal prices are not truncated to whole dollars
total += parseFloat(element.price);
});
this.invoice.total_price = total;
},
...
}
...
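These two methods can be exercised outside Vue as a plain sketch, with the component state replaced by local variables:

```javascript
// Standalone sketch of the delete-and-recalculate flow.
let transactions = [
  { id: 1, name: "Design", price: "200" },
  { id: 2, name: "Hosting", price: "50.5" },
];

function calcTotal(list) {
  // parseFloat so decimal prices are not truncated
  return list.reduce((sum, el) => sum + parseFloat(el.price), 0);
}

function deleteTransaction(list, id) {
  return list.filter((el) => el.id !== id);
}

console.log(calcTotal(transactions)); // 250.5
transactions = deleteTransaction(transactions, 2);
console.log(calcTotal(transactions)); // 200
```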
Finally, the onSubmit()
method will submit the form to the backend server. In the method, formData
and axios
are used to send the requests. The transaction array containing the transaction objects is split into two different arrays. One array holds the transaction names and the other holds the transaction prices. The server then attempts to process the request and send back a response to the user.
...
methods: {
...
onSubmit() {
const formData = new FormData();
this.transactions.forEach(element => {
formData.append("txn_names[]", element.name);
formData.append("txn_prices[]", element.price)
});
formData.append("name", this.invoice.name);
formData.append("user_id", this.$route.params.user.id);
this.loading = "Creating Invoice, please wait ...";
// Post to server
axios.post("http://localhost:3128/invoice", formData).then(res => {
// Post a status message
this.loading = "";
// the server returns a message in both the success and failure case
this.status = res.data.message;
});
}
}
...
When you go back to the application on localhost:8080
and sign in, you will get redirected to a dashboard.
Now that you can create invoices, the next step is to create a visual picture of invoices and their statuses. To do this, create a ViewInvoices.vue
file in the src/components
directory of the application.
Edit the file to look like this:
<template>
<div>
<div class="tab-pane p-3 fade show active">
<div class="row">
<div class="col-md-12">
<h3>Here is a list of your invoices</h3>
<table class="table">
<thead>
<tr>
<th scope="col">Invoice #</th>
<th scope="col">Invoice Name</th>
<th scope="col">Status</th>
<th scope="col"></th>
</tr>
</thead>
<tbody>
<template v-for="invoice in invoices">
<tr :key="invoice.id">
<th scope="row">{{ invoice.id }}</th>
<td>{{ invoice.name }}</td>
<td v-if="invoice.paid == 0">Unpaid</td>
<td v-else>Paid</td>
<td><a href="#" class="btn btn-success">To Invoice</a></td>
</tr>
</template>
</tbody>
</table>
</div>
</div>
</div>
</div>
</template>
...
The template above contains a table displaying the invoices a user has created. It also has a button that takes the user to a single invoice page when an invoice is clicked.
...
<script>
import axios from "axios";
export default {
name: "ViewInvoices",
data() {
return {
invoices: [],
user: this.$route.params.user
};
},
mounted() {
axios
.get(`http://localhost:3128/invoice/user/${this.user.id}`)
.then(res => {
if (res.data.status == true) {
this.invoices = res.data.invoices;
}
});
}
};
</script>
The ViewInvoices
component has its data properties as an array of invoices and the user details. The user details are obtained from the route parameters. When the component is mounted
, a GET
request is made to the backend server to fetch the list of invoices created by the user which are then displayed using the template that was shown earlier.
When you go to the /dashboard
, click the View Invoices option on the Navigation
to see a listing of invoices and payment status.
In this part of the series, you configured the user interface of the invoicing application using concepts from Vue.
Continue your learning with How To Build a Lightweight Invoicing App with Vue and Node: JWT Authentication and Sending Invoices.
An invoice is a document of goods and services provided that a business can present to customers and clients.
A digital invoicing tool will need to track clients, record services and prices, update the status of paid invoices, and provide an interface for displaying invoices. This will require CRUD (Create, Read, Update, Delete), databases, and routing.
Note: This is Part 1 of a 3-part series. The second tutorial is How To Build a Lightweight Invoicing App with Node: User Interface. The third tutorial is How To Build a Lightweight Invoicing App with Vue and Node: JWT Authentication and Sending Invoices.
In this tutorial, you will build an invoicing application using Vue and NodeJS. This application will perform functions such as creating, sending, editing, and deleting an invoice.
To complete this tutorial, you will need:
Note: SQLite currently comes pre-installed on macOS and Mac OS X by default.
This tutorial was verified with Node v16.1.0, npm
v7.12.1, and SQLite v3.32.3.
Now that we have the requirements all set, the next thing to do is to create the backend server for the application. The backend server will maintain the database connection.
Start by creating a directory for the new project:
- mkdir invoicing-app
Navigate to the newly created project directory:
- cd invoicing-app
Then initialize it as a Node project:
- npm init -y
For the server to function appropriately, there are some Node packages that need to be installed. You can install them by running this command:
- npm install bcrypt@5.0.1 bluebird@3.7.2 cors@2.8.5 express@4.17.1 lodash@4.17.21 multer@1.4.2 sqlite3@5.0.2 umzug@2.3.0
That command installs the following packages:
- bcrypt to hash user passwords
- bluebird to use Promises when writing migrations
- cors for cross-origin resource sharing
- express to power our web application
- lodash for utility methods
- multer to handle incoming form requests
- sqlite3 to create and maintain the database
- umzug as a task runner to run our database migrations

Note: Since the original publication, this tutorial was updated to include lodash for isEmpty(). The middleware library for handling multipart/form-data was changed from connect-multiparty to multer.
Create a server.js
file that will house the application logic. In the server.js
file, import the necessary modules and create an Express app:
const express = require('express');
const cors = require('cors');
const sqlite3 = require('sqlite3').verbose();
const PORT = process.env.PORT || 3128;
const app = express();
app.use(express.urlencoded({extended: false}));
app.use(express.json());
app.use(cors());
// ...
Create a /
route to test that the server works:
// ...
app.get('/', function(req, res) {
res.send('Welcome to Invoicing App.');
});
app.listen()
tells the server the port to listen to for incoming routes:
// ...
app.listen(PORT, function() {
console.log(`App running on localhost:${PORT}.`);
});
To start the server, run the following in your project directory:
- node server
Your application will now begin to listen to incoming requests.
For an invoicing application, a database is needed to store the existing invoices. SQLite is going to be the database client of choice for this application.
Start by creating a database
folder:
- mkdir database
Run the sqlite3
client and create an InvoicingApp.db
file for your database in this new directory:
- sqlite3 database/InvoicingApp.db
Now that the database has been selected, the next thing is to create the needed tables.
This application will use three tables:
- users (id, name, email, company_name, password)
- invoices (id, name, paid, user_id)
- transactions (name, price, invoice_id)

Since the necessary tables have been identified, the next step is to run the queries to create the tables.
Migrations are used to keep track of changes in a database as the application grows. To do this, create a migrations
folder in the database
directory.
- mkdir database/migrations
This will be the location of all the migration files.
Now, create a 1.0.js
file in the migrations
folder. This naming convention is to keep track of the newest changes.
In the 1.0.js
file, you first import the node modules:
"use strict";
const path = require('path');
const Promise = require('bluebird');
const sqlite3 = require('sqlite3');
// ...
Then, export an up
function that will be executed when the migration file is run and a down
function to reverse the changes to the database.
// ...
module.exports = {
up: function() {
return new Promise(function(resolve, reject) {
let db = new sqlite3.Database('./database/InvoicingApp.db');
db.run(`PRAGMA foreign_keys = ON`);
// ...
In the up
function, the connection is first made to the database. Then the foreign keys are enabled on the sqlite
database. In SQLite, foreign keys are disabled by default to allow for backwards compatibility, so the foreign keys have to be enabled on every connection.
Next, specify the queries to create the tables:
// ...
db.serialize(function() {
db.run(`CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT,
email TEXT,
company_name TEXT,
password TEXT
)`);
db.run(`CREATE TABLE invoices (
id INTEGER PRIMARY KEY,
name TEXT,
user_id INTEGER,
paid NUMERIC,
FOREIGN KEY(user_id) REFERENCES users(id)
)`);
db.run(`CREATE TABLE transactions (
id INTEGER PRIMARY KEY,
name TEXT,
price INTEGER,
invoice_id INTEGER,
FOREIGN KEY(invoice_id) REFERENCES invoices(id)
)`);
});
db.close();
});
}
}
The serialize()
function is used to specify that the queries will be run sequentially and not simultaneously.
Once the migration files have been created, the next step is running them to make the changes in the database. To do this, create a scripts
folder from the root of your application:
- mkdir scripts
Then create a file called migrate.js
in this new directory. And add the following to the migrate.js
file:
const path = require('path');
const Umzug = require('umzug');
let umzug = new Umzug({
logging: function() {
console.log.apply(null, arguments);
},
migrations: {
path: './database/migrations',
pattern: /\.js$/
},
upName: 'up'
});
// ...
First, the needed node modules are imported. Then a new umzug
object is created with the configurations. The path
and pattern
of the migrations scripts are also specified. To learn more about the configurations, reference the umzug
README.
To also give some verbose feedback, create a function to log events as shown below and then finally execute the up
function to run the database queries specified in the migrations folder:
// ...
function logUmzugEvent(eventName) {
return function(name, migration) {
console.log(`${name} ${eventName}`);
};
}
// using event listeners to log events
umzug.on('migrating', logUmzugEvent('migrating'));
umzug.on('migrated', logUmzugEvent('migrated'));
umzug.on('reverting', logUmzugEvent('reverting'));
umzug.on('reverted', logUmzugEvent('reverted'));
// this will run your migrations
umzug.up().then(() => console.log('all migrations done'));
Now, to execute the script, go to your terminal and in the root directory of your application, run:
- node scripts/migrate.js
You will see output similar to the following:
Output== 1.0: migrating =======
1.0 migrating
all migrations done
At this point, running the migrate.js
script has applied the 1.0.js
configuration to InvoicingApp.db
.
Now that the database is adequately set up, the next thing is to go back to the server.js
file and create the application routes. For this application, the following routes will be made available:
URL | METHOD | FUNCTION |
---|---|---|
/register | POST | To register a new user |
/login | POST | To log in an existing user |
/invoice | POST | To create a new invoice |
/invoice/user/{user_id} | GET | To fetch all the invoices for a user |
/invoice/user/{user_id}/{invoice_id} | GET | To fetch a certain invoice |
/invoice/send | POST | To send invoice to client |
/register
To register a new user, a POST request will be made to the /register
route of your server.
Revisit server.js
and add the following lines of code:
// ...
const _ = require('lodash');
const multer = require('multer');
const upload = multer();
const bcrypt = require('bcrypt');
const saltRounds = 10;
// POST /register - begin
app.post('/register', upload.none(), function(req, res) {
// check to make sure none of the fields are empty
if (
_.isEmpty(req.body.name)
|| _.isEmpty(req.body.email)
|| _.isEmpty(req.body.company_name)
|| _.isEmpty(req.body.password)
) {
return res.json({
"status": false,
"message": "All fields are required."
});
}
// any other intended checks
// ...
A check is made to see if any of the fields are empty and if the data sent matches all the specifications. If an error occurs, an error message is sent to the user as a response. If not, the password is hashed and the data is then stored in the database and a response is sent to the user informing them that they are registered.
// ...
bcrypt.hash(req.body.password, saltRounds, function(err, hash) {
let db = new sqlite3.Database('./database/InvoicingApp.db');
let sql = `INSERT INTO
users(
name,
email,
company_name,
password
)
VALUES(
'${req.body.name}',
'${req.body.email}',
'${req.body.company_name}',
'${hash}'
)`;
db.run(sql, function(err) {
if (err) {
throw err;
} else {
return res.json({
"status": true,
"message": "User Created."
});
}
});
db.close();
});
});
// POST /register - end
Now, if we use a tool like Postman to send a POST request to /register
with name
, email
, company_name
, and password
, it will create a new user:
Key | Value |
---|---|
name | Test User |
email | example@example.com |
company_name | Test Company |
password | password |
We can use a query and display the Users
table to verify the user creation:
- select * from users;
The database now contains a newly created user:
Output1|Test User|example@example.com|Test Company|[hashed password]
Your /register
route is now verified.
/login
If an existing user tries to log in to the system using the /login
route, they need to provide their email address and password. Once they do that, the route handles the request as follows:
// ...
// POST /login - begin
app.post('/login', upload.none(), function(req, res) {
let db = new sqlite3.Database('./database/InvoicingApp.db');
let sql = `SELECT * from users where email='${req.body.email}'`;
db.all(sql, [], (err, rows) => {
if (err) {
throw err;
}
db.close();
if (rows.length == 0) {
return res.json({
"status": false,
"message": "Sorry, wrong email."
});
}
// ...
A query is made to the database to fetch the record of the user with a particular email. If the result returns an empty array, then it means that the user doesn’t exist and a response is sent informing the user of the error.
If the database query returns user data, a further check is made to see if the password entered matches that password in the database. If it does, then a response is sent with the user data.
// ...
let user = rows[0];
let authenticated = bcrypt.compareSync(req.body.password, user.password);
delete user.password;
if (authenticated) {
return res.json({
"status": true,
"user": user
});
}
return res.json({
"status": false,
"message": "Wrong password. Please retry."
});
});
});
// POST /login - end
// ...
When the route is tested, you will receive either a successful or failed result.
Now, if we use a tool like Postman to send a POST request to /login
with email
and password
, it will send back a response.
Key | Value |
---|---|
email | example@example.com |
password | password |
Since this user exists in the database, we get the following response:
Output{
"status": true,
"user": {
"id": 1,
"name": "Test User",
"email": "example@example.com",
"company_name": "Test Company"
}
}
Your /login
route is now verified.
/invoice
The /invoice
route handles the creation of an invoice. Data passed to the route will include the user ID, name of the invoice, and invoice status. It will also include the singular transactions to make up the invoice.
The server handles the request as follows:
// ...
// POST /invoice - begin
app.post('/invoice', upload.none(), function(req, res) {
// validate data
if (_.isEmpty(req.body.name)) {
return res.json({
"status": false,
"message": "Invoice needs a name."
});
}
// perform other checks
// ...
First, the data sent to the server is validated. Then a connection is made to the database for the subsequent queries.
// ...
// create invoice
let db = new sqlite3.Database('./database/InvoicingApp.db');
let sql = `INSERT INTO invoices(
name,
user_id,
paid
)
VALUES(
'${req.body.name}',
'${req.body.user_id}',
0
)`;
// ...
The INSERT
query needed to create the invoice is written and then executed. Afterward, the singular transactions are inserted into the transactions
table with the invoice_id
as a foreign key to reference them.
// ...
db.serialize(function() {
db.run(sql, function(err) {
if (err) {
throw err;
}
let invoice_id = this.lastID;
for (let i = 0; i < req.body.txn_names.length; i++) {
let query = `INSERT INTO
transactions(
name,
price,
invoice_id
) VALUES(
'${req.body.txn_names[i]}',
'${req.body.txn_prices[i]}',
'${invoice_id}'
)`;
db.run(query);
}
return res.json({
"status": true,
"message": "Invoice created."
});
});
});
});
// POST /invoice - end
// ...
Now, if we use a tool like Postman to send a POST request to /invoice
with name
, user_id
, txn_names
, and txn_prices
, it will create a new invoice and record the transactions:
Key | Value |
---|---|
name | Test Invoice |
user_id | 1 |
txn_names | iPhone |
txn_prices | 600 |
txn_names | MacBook |
txn_prices | 1700 |
Then, check the Invoices table:
- select * from invoices;
Observe the following result:
Output1|Test Invoice|1|0
Run the following command:
- select * from transactions;
Observe the following result:
Output1|iPhone|600|1
2|MacBook|1700|1
Your /invoice
route is now verified.
/invoice/user/{user_id}
Now, when a user wants to see all the created invoices, the client will make a GET
request to the /invoice/user/:id
route. The user_id
is passed as a route parameter. The request is handled as follows:
// ...
// GET /invoice/user/:user_id - begin
app.get('/invoice/user/:user_id', upload.none(), function(req, res) {
let db = new sqlite3.Database('./database/InvoicingApp.db');
let sql = `SELECT * FROM invoices WHERE user_id='${req.params.user_id}' ORDER BY invoices.id`;
db.all(sql, [], (err, rows) => {
if (err) {
throw err;
}
return res.json({
"status": true,
"invoices": rows
});
});
});
// GET /invoice/user/:user_id - end
// ...
A query is run to fetch all the invoices and the transactions related to the invoice belonging to a particular user.
Consider a request for all the invoices for a user:
localhost:3128/invoice/user/1
It will respond with the following data:
Output{"status":true,"invoices":[{"id":1,"name":"Test Invoice","user_id":1,"paid":0}]}
Your /invoice/user/:user_id
route is now verified.
/invoice/user/{user_id}/{invoice_id}
To fetch a specific invoice, a GET
request is made with the user_id
and invoice_id
to the /invoice/user/{user_id}/{invoice_id}
route. The request is handled as follows:
// ...
// GET /invoice/user/:user_id/:invoice_id - begin
app.get('/invoice/user/:user_id/:invoice_id', upload.none(), function(req, res) {
let db = new sqlite3.Database('./database/InvoicingApp.db');
let sql = `SELECT * FROM invoices LEFT JOIN transactions ON invoices.id=transactions.invoice_id WHERE user_id='${req.params.user_id}' AND invoice_id='${req.params.invoice_id}' ORDER BY transactions.id`;
db.all(sql, [], (err, rows) => {
if (err) {
throw err;
}
return res.json({
"status": true,
"transactions": rows
});
});
});
// GET /invoice/user/:user_id/:invoice_id - end
// set application port
// ...
A query is run to fetch a single invoice and the transactions related to the invoice belonging to the user.
Consider a request for a specific invoice for a user:
localhost:3128/invoice/user/1/1
It will respond with the following data:
Output{"status":true,"transactions":[{"id":1,"name":"iPhone","user_id":1,"paid":0,"price":600,"invoice_id":1},{"id":2,"name":"Macbook","user_id":1,"paid":0,"price":1700,"invoice_id":1}]}
Your /invoice/user/:user_id/:invoice_id
route is now verified.
In this tutorial, you set up your server with all the needed routes for a lightweight invoicing application.
Continue your learning with How To Build a Lightweight Invoicing App with Node: User Interface.
Command-line arguments are a way to provide additional input for commands. You can use command-line arguments to add flexibility and customization to your Node.js scripts.
In this article, you will learn about argument vectors, detecting argument flags, handling multiple arguments and values, and using the commander
package.
To follow through this tutorial, you’ll need:
This tutorial was verified with Node v16.10.0, npm
v7.12.2, and commander
v7.2.0.
Node.js supports a list of passed arguments, known as an argument vector. The argument vector is an array available from process.argv
in your Node.js script.
The array contains everything that’s passed to the script, including the Node.js executable and the path and filename of the script.
If you were to run the following command:
- node example.js -a -b -c
Your argument vector would contain five items:
[
'/usr/bin/node',
'/path/to/example.js',
'-a',
'-b',
'-c'
]
At the very least, a script that’s run without any arguments will still contain two items in the array, the node
executable and the script file that is being run.
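Because the first two entries are always the executable and the script path, a common pattern (not used in this article's examples) is to slice them off; the sample array below mimics what Node would populate:

```javascript
// process.argv always starts with the node executable and the
// script path, so slicing from index 2 leaves only the
// user-supplied arguments.
const argv = ['/usr/bin/node', '/path/to/example.js', '-a', '-b', '-c'];
const args = argv.slice(2);
console.log(args); // [ '-a', '-b', '-c' ]
```

In a real script you would write process.argv.slice(2) instead of the sample array.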
Typically the argument vector is paired with an argument count (argc
) that tells you how many arguments have been passed in. Node.js lacks this particular variable but we can always grab the length
of the argument vector array:
if (process.argv.length === 2) {
console.error('Expected at least one argument!');
process.exit(1);
}
This example code will check the length
of argv
. A length of 2
would indicate that only the node
executable and the script file are present. If there are no arguments, it will print out the message: Expected at least one argument!
and exit
.
Let’s consider an example that displays a default message. However, when a specific flag is present, it will display a different message.
if (process.argv[2] && process.argv[2] === '-f') {
console.log('Flag is present.');
} else {
console.log('Flag is not present.');
}
This script checks if we have a third item in our argument vector. The index is 2
because arrays in JavaScript are zero-indexed. If a third item is present and is equal to -f
it will alter the output.
Here is an example of running the script without arguments:
- node example.js
And the generated output:
OutputFlag is not present.
Here is an example of running the script with arguments:
- node example.js -f
And the generated output:
OutputFlag is present.
We don’t have to limit ourselves to modifying the conditional control structure; we can use the actual value that’s been passed to the script as well:
const custom = (process.argv[2] || 'Default');
console.log('Custom: ', custom);
Instead of a conditional based on the argument, this script takes the value that is passed in (defaulting to "Default"
when the argument is missing) and injects it into the script output.
We have written a script that accepts an argument and one that accepts a raw value. What about scenarios where we want to use a value in conjunction with an argument?
To make things a bit more complex, let’s also accept multiple arguments:
// Check to see if the -f argument is present
const flag = (
process.argv.indexOf('-f') > -1 ? 'Flag is present.' : 'Flag is not present.'
);
// Checks for --custom and if it has a value
const customIndex = process.argv.indexOf('--custom');
let customValue;
if (customIndex > -1) {
// Retrieve the value after --custom
customValue = process.argv[customIndex + 1];
}
const custom = (customValue || 'Default');
console.log('Flag:', `${flag}`);
console.log('Custom:', `${custom}`);
By using indexOf
instead of relying on specific index values, we are able to look for the arguments anywhere in the argument vector, regardless of the order!
Here is an example of running the script without arguments:
- node example.js
And the generated output:
OutputFlag: Flag is not present.
Custom: Default
Here is an example of running the script with arguments:
- node example.js -f --custom Override
And the generated output:
OutputFlag: Flag is present.
Custom: Override
Now, your command-line script can accept multiple arguments and values.
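The same lookups can be wrapped into small reusable helpers; hasFlag and getOption are our own illustrative names, not Node built-ins:

```javascript
// Returns true when the flag appears anywhere in the argv array.
function hasFlag(argv, name) {
  return argv.indexOf(name) > -1;
}

// Returns the value that follows `name`, or the fallback when the
// option is absent or has no trailing value.
function getOption(argv, name, fallback) {
  const i = argv.indexOf(name);
  return i > -1 && argv[i + 1] !== undefined ? argv[i + 1] : fallback;
}

// In a real script, pass process.argv instead of this sample array:
const argv = ['/usr/bin/node', '/path/to/example.js', '-f', '--custom', 'Override'];
console.log(hasFlag(argv, '-f'));                     // true
console.log(getOption(argv, '--custom', 'Default'));  // Override
console.log(getOption(argv, '--missing', 'Default')); // Default
```

Keeping the lookups in functions makes the behavior easy to unit-test before handing parsing over to a library like commander.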
commander
The aforementioned examples work when the argument input is quite specific. However, users may attempt to use arguments with and without equal signs (-nJaneDoe
or --name=JohnDoe
), quoted strings to pass-in values with spaces (-n "Jane Doe"
) and even have arguments aliased to provide short and longhand versions.
That’s where the commander
library can help.
commander
is a popular Node.js library that is inspired by the Ruby library of the same name.
First, in your project directory, initialize your project:
- npm init
Then, install commander
:
- npm install commander@7.2.0
Let’s take our previous example and port it to use commander
:
const commander = require('commander');
commander
.version('1.0.0', '-v, --version')
.usage('[OPTIONS]...')
.option('-f, --flag', 'Detects if the flag is present.')
.option('-c, --custom <value>', 'Overwriting value.', 'Default')
.parse(process.argv);
const options = commander.opts();
const flag = (options.flag ? 'Flag is present.' : 'Flag is not present.');
console.log('Flag:', `${flag}`);
console.log('Custom:', `${options.custom}`);
commander
does all of the hard work by processing process.argv
and adding the arguments and any associated values as properties in our commander
object.
We can easily version our script and report the version number with -v
or --version
. We also get some friendly output that explains the script’s usage by passing the --help
argument and if you happen to pass an argument that’s not defined or is missing a passed value, it will throw an error.
In this article, you learned about argument vectors, detecting argument flags, handling multiple arguments and values, and using the commander
package.
While you can quickly create scripts with your own command-line arguments, you may want to consider utilizing commander
or Inquirer.js
if you would like more robustness and maintainability.
When quickly creating Node applications, a fast way to template your application is sometimes necessary.
Jade comes as the default template engine for Express, but its syntax can be overly complex for many use cases.
Embedded JavaScript templates (EJS) can be used as an alternative template engine.
In this article, you will learn how to apply EJS to an Express application, include repeatable parts of your site, and pass data to the views.
If you would like to follow along with this article, you will need:
This tutorial was originally written for express
v4.17.1 and ejs
v3.1.5. It has been verified with Node v16.0.0, npm
v7.11.1, express
v4.17.1, and ejs
v3.1.6.
First, open your terminal window and create a new project directory:
- mkdir ejs-demo
Then, navigate to the newly created directory:
- cd ejs-demo
At this point, you can initialize a new npm project:
- npm init -y
Next, you will need to install the express
package:
- npm install express@4.17.1
Then install the ejs
package:
- npm install ejs@3.1.6
At this point, you have a new project ready to use Express and EJS.
server.js
With all of the dependencies installed, let’s configure the application to use EJS and set up the routes for the Index page and the About page.
Create a new server.js
file and open it with your code editor and add the following lines of code:
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// use res.render to load up an ejs view file
// index page
app.get('/', function(req, res) {
res.render('pages/index');
});
// about page
app.get('/about', function(req, res) {
res.render('pages/about');
});
app.listen(8080);
console.log('Server is listening on port 8080');
This code defines the application and listens on port 8080
.
This code also sets EJS as the view engine for the Express application using:
`app.set('view engine', 'ejs');`
Notice how the code sends a view to the user by using res.render()
. It is important to note that res.render()
will look in a views
folder for the view. So you only have to define pages/index
since the full path is views/pages/index
.
Next, you will create the views using EJS.
Like a lot of the applications you build, there will be a lot of code that is reused. These are considered partials. In this example, there will be three partials that will be reused on the Index page and About page: head.ejs
, header.ejs
, and footer.ejs
. Let’s make those files now.
Create a new views
directory:
- mkdir views
Then, create a new partials
subdirectory:
- mkdir views/partials
In this directory, create a new head.ejs
file and open it with your code editor. Add the following lines of code:
<meta charset="UTF-8">
<title>EJS Is Fun</title>
<!-- CSS (load bootstrap from a CDN) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.2/css/bootstrap.min.css">
<style>
body { padding-top:50px; }
</style>
This code contains metadata for the head
for an HTML document. It also includes Bootstrap styles.
Next, create a new header.ejs
file and open it with your code editor. Add the following lines of code:
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="/">EJS Is Fun</a>
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="nav-link" href="/">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
</ul>
</nav>
This code contains navigation for an HTML document and uses several classes from Bootstrap for styling.
Next, create a new footer.ejs
file and open it with your code editor. Add the following lines of code:
<p class="text-center text-muted">© Copyright 2020 The Awesome People</p>
This code contains copyright information and uses several classes from Bootstrap for styling.
Next, you will use these partials in index.ejs
and about.ejs
.
You have three partials defined. Now you can include
them in your views.
Use <%- include('RELATIVE/PATH/TO/FILE') %>
to embed an EJS partial in another file.
Note: Use <%- instead of just <% to tell EJS to render raw HTML.
Then, create a new pages
subdirectory:
- mkdir views/pages
In this directory, create a new index.ejs
file and open it with your code editor. Add the following lines of code:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
Save the changes to this file and then run the application:
- node server.js
If you visit http://localhost:8080/
in a web browser, you can observe the Index page:
Next, create a new about.ejs
file and open it with your code editor. Add the following lines of code:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="row">
<div class="col-sm-8">
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</div>
<div class="col-sm-4">
<div class="well">
<h3>Look I'm A Sidebar!</h3>
</div>
</div>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
This code adds a Bootstrap sidebar to demonstrate how partials can be structured to reuse across different templates and pages.
Save the changes to this file and then run the application:
- node server.js
If you visit http://localhost:8080/about
in a web browser, you can observe the About page with a sidebar:
Now you can start using EJS for passing data from the Node application to the views.
Let’s define some basic variables and a list to pass to the Index page.
Revisit server.js
in your code editor and add the following lines of code inside the app.get('/')
route:
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// use res.render to load up an ejs view file
// index page
app.get('/', function(req, res) {
var mascots = [
{ name: 'Sammy', organization: "DigitalOcean", birth_year: 2012},
{ name: 'Tux', organization: "Linux", birth_year: 1996},
{ name: 'Moby Dock', organization: "Docker", birth_year: 2013}
];
var tagline = "No programming concept is complete without a cute animal mascot.";
res.render('pages/index', {
mascots: mascots,
tagline: tagline
});
});
// about page
app.get('/about', function(req, res) {
res.render('pages/about');
});
app.listen(8080);
console.log('Server is listening on port 8080');
This code defines an array called mascots
and a string called tagline
. Next, let’s use them in index.ejs
.
To echo a single variable, you can use <%= tagline %>
.
Revisit index.ejs
in your code editor and add the following lines of code:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
<h2>Variable</h2>
<p><%= tagline %></p>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
This code will display the tagline
value on the Index page.
To loop over data, you can use .forEach
.
Revisit index.ejs
in your code editor and add the following lines of code:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
<h2>Variable</h2>
<p><%= tagline %></p>
<ul>
<% mascots.forEach(function(mascot) { %>
<li>
<strong><%= mascot.name %></strong>
representing <%= mascot.organization %>,
born <%= mascot.birth_year %>
</li>
<% }); %>
</ul>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
Save the changes to this file and then run the application:
- node server.js
If you visit http://localhost:8080/
in a web browser, you can observe the Index page with the mascots
:
The EJS partial has access to all the same data as the parent view. But be careful. If you are referencing a variable in a partial, it needs to be defined in every view that uses the partial or it will throw an error.
You can also define and pass variables to an EJS partial in the include syntax like this:
...
<header>
<%- include('../partials/header', {variant: 'compact'}); %>
</header>
...
But again, you need to be careful about assuming a variable has been defined.
If you want to reference a variable in a partial that may not always be defined, and give it a default value, you can do so like this:
...
<em>Variant: <%= typeof variant != 'undefined' ? variant : 'default' %></em>
...
In the line above, the EJS code is rendering the value of variant
if it’s defined, and default
if not.
In this article, you learned how to apply EJS to an Express application, include repeatable parts of your site, and pass data to the views.
EJS lets you build applications when you do not require additional complexity. By using partials and having the ability to easily pass variables to your views, you can build some great applications quickly.
Consult the EJS documentation for additional information on features and syntax. Consult Comparing JavaScript Templating Engines: Jade, Mustache, Dust and More for understanding the pros and cons of different view engines.
In Node.js and Express applications, res.sendFile()
can be used to deliver files. Delivering HTML files using Express can be useful when you need a solution for serving static pages.
Note: Prior to Express 4.8.0, res.sendfile()
was supported. This lowercase version of res.sendFile()
has since been deprecated.
In this article, you will learn how to use res.sendFile()
.
To complete this tutorial, you will need:
This tutorial was verified with Node v16.0.0, npm
v7.11.1, and express
v4.17.1.
First, open your terminal window and create a new project directory:
- mkdir express-sendfile-example
Then, navigate to the newly created directory:
- cd express-sendfile-example
At this point, you can initialize a new npm project:
- npm init -y
Next, you will need to install the express
package:
- npm install express@4.17.1
At this point, you have a new project ready to use Express.
Create a new server.js
file and open it with your code editor:
const express = require('express');
const app = express();
const port = process.env.PORT || 8080;
// sendFile will go here
app.listen(port);
console.log('Server started at http://localhost:' + port);
Revisit your terminal window and run your application:
- node server.js
After verifying your project is working as expected, you can use res.sendFile()
.
res.sendFile()
Revisit server.js
with your code editor and add path
, .get()
and res.sendFile()
:
const express = require('express');
const path = require('path');
const app = express();
const port = process.env.PORT || 8080;
// sendFile will go here
app.get('/', function(req, res) {
res.sendFile(path.join(__dirname, '/index.html'));
});
app.listen(port);
console.log('Server started at http://localhost:' + port);
When a request is made to the server, an index.html
file is served.
Create a new index.html
file and open it with your code editor:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Sample Site</title>
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<style>
body { padding-top: 50px; }
</style>
</head>
<body>
<div class="container">
<div class="jumbotron">
<h1>res.sendFile() Works!</h1>
</div>
</div>
</body>
</html>
This code will display the message: res.sendFile() Works!
.
Note: This tutorial makes use of BootstrapCDN for styling, but it is not required.
Save your changes. Then, open your terminal window again and re-run the server.
- node server.js
With the server running, visit http://localhost:8080
in a web browser:
Your application now uses res.sendFile()
to serve HTML files.
In this article, you learned how to use res.sendFile()
.
Continue your learning with Learn to Use the Express 4.0 Router and How To Retrieve URL and POST Parameters with Express.
Often when you are building applications using Express, you will need to get information from your users. Two of the most popular methods are URL parameters and POST parameters.
In this article, you will learn how to use Express to retrieve URL parameters and POST parameters from requests.
To complete this tutorial, you will need:
Note: Previously, this tutorial recommended using req.param
. This is deprecated as of v4.11.0. This tutorial also recommended installing body-parser
. This is no longer necessary as of v4.16.0.
This tutorial was verified with Node v15.4.0, npm
v7.10.0, and express
v4.17.1.
First, open your terminal window and create a new project directory:
- mkdir express-params-example
Then, navigate to the newly created directory:
- cd express-params-example
At this point, you can initialize a new npm project:
- npm init -y
Next, you will need to install the express
package:
- npm install express@4.17.1
At this point, you have a new project ready to use Express.
Create a new server.js
file and open it with your code editor:
const express = require('express');
const app = express();
const port = process.env.PORT || 8080;
// routes will go here
app.listen(port);
console.log('Server started at http://localhost:' + port);
Revisit your terminal window and run your application:
- node server.js
You will have to restart the node server every time you edit server.js
. If this gets tedious, see How To Restart Your Node.js Apps Automatically with nodemon.
Now let’s create some routes to test grabbing parameters.
Using req.query with URL Parameters
req.query can be used to retrieve values for URL parameters.
Consider the following example:
http://example.com/api/users?id=4&token=sdfa3&geo=us
This URL includes parameters for id
, token
, and geo
(geolocation):
id: 4
token: sdfa3
geo: us
Revisit server.js
with your code editor and add the following lines of code for req.query.id
, req.query.token
, and req.query.geo
:
// ...
// routes will go here
// ...
app.get('/api/users', function(req, res) {
const user_id = req.query.id;
const token = req.query.token;
const geo = req.query.geo;
res.send({
'user_id': user_id,
'token': token,
'geo': geo
});
});
app.listen(port);
console.log('Server started at http://localhost:' + port);
With the server running, use the URL http://localhost:8080/api/users?id=4&token=sdfa3&geo=us
in either a web browser or with Postman.
The server will respond back with the user_id
, token
, and geo
values.
Using req.params with Routes
req.params can be used to retrieve values from routes.
Consider the following URL:
http://localhost:8080/api/1
This URL includes routes for api
and :version
(1
).
Revisit server.js
with your code editor and add the following lines of code for req.params.version
:
// ...
// routes will go here
// ...
app.get('/api/:version', function(req, res) {
res.send(req.params.version);
});
app.listen(port);
console.log('Server started at http://localhost:' + port);
With the server running, use the URL http://localhost:8080/api/1
in either a web browser or with Postman.
The server will respond back with the version
value.
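To make the idea concrete, here is a simplified, illustrative matcher showing how a pattern like /api/:version can be turned into a regular expression and a req.params-style object. Express's real route compiler handles many more cases; this sketch only handles named path segments:

```javascript
// Sketch: how a route pattern such as '/api/:version' can be matched
// against a request path. For illustration only -- not Express's implementation.
function matchRoute(pattern, path) {
  const names = [];
  const regex = new RegExp(
    '^' +
      pattern.replace(/:([^/]+)/g, (_, name) => {
        names.push(name);
        return '([^/]+)'; // capture exactly one path segment
      }) +
      '$'
  );
  const match = path.match(regex);
  if (!match) return null;
  // Build a req.params-like object from the captured segments.
  return Object.fromEntries(names.map((name, i) => [name, match[i + 1]]));
}

console.log(matchRoute('/api/:version', '/api/1')); // { version: '1' }
console.log(matchRoute('/api/:version', '/other')); // null
```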
Using .param with Route Handlers
Next, you will use the Express .param function to grab a specific parameter. This is considered middleware and will run before the route is called.
This can be used for validations (like checking if a user exists) or grabbing important information about that user or item.
Consider the following URL:
http://localhost:8080/api/users/sammy
This URL includes routes for users
and :name
(sammy
).
Revisit server.js
with your code editor and add the following lines of code for modifying the name
:
// ...
app.param('name', function(req, res, next, name) {
const modified = name.toUpperCase();
req.name = modified;
next();
});
// routes will go here
// ...
app.get('/api/users/:name', function(req, res) {
res.send('Hello ' + req.name + '!');
});
app.listen(port);
console.log('Server started at http://localhost:' + port);
With the server running, use the URL http://localhost:8080/api/users/sammy
in either a web browser or with Postman.
The server will respond back with:
OutputHello SAMMY!
You can use this param
middleware for validations and making sure that information passed through is valid and in the correct format.
Then save the information to the request (req
) so that the other routes will have access to it.
Using req.body with POST Parameters
express.json() and express.urlencoded() are built-in middleware functions to support JSON-encoded and URL-encoded bodies.
Open server.js
with your code editor and add the following lines of code:
const express = require('express');
const app = express();
const port = process.env.PORT || 8080;
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// ...
Next, add app.post
with req.body.id
, req.body.token
, and req.body.geo
:
// ...
// routes will go here
// ...
app.post('/api/users', function(req, res) {
const user_id = req.body.id;
const token = req.body.token;
const geo = req.body.geo;
res.send({
'user_id': user_id,
'token': token,
'geo': geo
});
});
app.listen(port);
console.log('Server started at http://localhost:' + port);
With the server running, generate a POST request with Postman.
Note: If you need assistance navigating the Postman interface for requests, consult the official documentation.
Set the request type to POST
and the request URL to http://localhost:8080/api/users
. Then set Body
to x-www-form-urlencoded
.
Then, provide the following values:
Key | Value |
---|---|
id | 4 |
token | sdfa3 |
geo | us |
After submitting the response, the server will respond back with the user_id
, token
, and geo
values.
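Conceptually, express.urlencoded() decodes a raw x-www-form-urlencoded body into the object you read from req.body. A rough sketch using Node's built-in URLSearchParams (Express's actual parser additionally handles streams, charsets, and the extended syntax):

```javascript
// Sketch: what express.urlencoded() does conceptually -- decode an
// x-www-form-urlencoded request body into a req.body-style object.
const rawBody = 'id=4&token=sdfa3&geo=us';

const body = Object.fromEntries(new URLSearchParams(rawBody));
console.log(body); // { id: '4', token: 'sdfa3', geo: 'us' }
```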
In this article, you learned how to use Express to retrieve URL parameters and POST parameters from requests. This was achieved with req.query
, req.params
, and req.body
.
Continue your learning with Learn to Use the Express 4.0 Router and How To Deliver HTML Files with Express.
Express is a web application framework for Node.js that allows you to spin up robust APIs and web servers in a much easier and cleaner way. It is a lightweight package that does not obscure the core Node.js features.
In this article, you will install and use Express to build a web server.
If you would like to follow along with this article, you will need:
This tutorial was verified with Node v15.14.0, npm
v7.10.0, express
v4.17.1, and serve-index
v1.9.1.
First, open your terminal window and create a new project directory:
- mkdir express-example
Then, navigate to the newly created directory:
- cd express-example
At this point, you can initialize a new npm project:
- npm init -y
Next, you will need to install the express
package:
- npm install express@4.17.1
At this point, you have a new project ready to use Express.
Now that Express is installed, create a new server.js
file and open it with your code editor. Then, add the following lines of code:
const express = require('express');
const app = express();
The first line here is grabbing the main Express module from the package you installed. This module is a function, which we then run on the second line to create our app
variable. You can create multiple apps this way, each with its own requests and responses.
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Successful response.');
});
These lines of code are where we tell our Express server how to handle a GET
request to our server. Express includes similar functions for POST
, PUT
, etc. using app.post(...)
, app.put(...)
, etc.
These functions take two main parameters. The first is the URL for this function to act upon. In this case, we are targeting '/'
, which is the root of our website: in this case, localhost:3000
.
The second parameter is a function with two arguments: req
, and res
. req
represents the request that was sent to the server; we can use this object to read data about what the client is requesting to do. res
represents the response that we will send back to the client.
Here, we are calling a function on res to send back a response: 'Successful response.'
.
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Successful response.');
});
app.listen(3000, () => console.log('Example app is listening on port 3000.'));
Finally, once we’ve set up our requests, we must start our server! We are passing 3000
into the listen
function, which tells the app which port to listen on. The function passed in as the second parameter is optional and runs when the server starts up. This provides us some feedback in the console to know that our application is running.
Revisit your terminal window and run your application:
- node server.js
Then, visit localhost:3000
in your web browser. Your browser window will display: 'Successful response'
. Your terminal window will display: 'Example app is listening on port 3000.'
.
And there we have it, a web server! However, we definitely want to send more than just a single line of text back to the client. Let’s briefly cover what middleware is and how to set this server up as a static file server!
With Express, we can write and use middleware functions, which have access to all HTTP requests coming to the server. These functions can:
We can write our own middleware functions or use third-party middleware by importing them the same way we would with any other package.
Let’s start by writing our own middleware, then we’ll try using some existing middleware to serve static files.
To define a middleware function, we call app.use()
and pass it a function. Here’s a basic middleware function to print the current time in the console during every request:
const express = require('express');
const app = express();
app.use((req, res, next) => {
console.log('Time: ', Date.now());
next();
});
app.get('/', (req, res) => {
res.send('Successful response.');
});
app.listen(3000, () => console.log('Example app is listening on port 3000.'));
The next()
call tells the middleware to go to the next middleware function if there is one. This is important to include at the end of our function - otherwise, the request will get stuck on this middleware.
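The role of next() can be sketched with a miniature middleware runner. This is illustrative only, not Express's implementation:

```javascript
// Sketch: a miniature middleware runner showing why calling next() matters.
// Each middleware receives a next callback; skipping the call stops the chain.
function runMiddleware(middlewares, req) {
  let index = 0;
  function next() {
    const middleware = middlewares[index++];
    if (middleware) middleware(req, next);
  }
  next();
}

const visited = [];
runMiddleware(
  [
    (req, next) => { visited.push('logger'); next(); },
    (req, next) => { visited.push('handler'); }, // no next(): chain ends here
    (req, next) => { visited.push('never reached'); },
  ],
  {}
);

console.log(visited); // [ 'logger', 'handler' ]
```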
We can optionally pass a path to the middleware, which will only handle requests to that route. For example:
const express = require('express');
const app = express();
app.use((req, res, next) => {
console.log('Time: ', Date.now());
next();
});
app.use('/request-type', (req, res, next) => {
console.log('Request type: ', req.method);
next();
});
app.get('/', (req, res) => {
res.send('Successful response.');
});
app.listen(3000, () => console.log('Example app is listening on port 3000.'));
By passing '/request-type'
as the first argument to app.use()
, this function will only run for requests sent to localhost:3000/request-type
.
Revisit your terminal window and run your application:
- node server.js
Then, visit localhost:3000/request-type
in your web browser. Your terminal window will display the timestamp of the request and 'Request type: GET'
.
Now, let’s try using existing middleware to serve static files. Express comes with a built-in middleware function: express.static
. We will also use a third-party middleware function, serve-index
, to display an index listing of our files.
First, inside the same folder where the express server is located, create a directory named public
and put some files in there.
Then, install the package serve-index
:
- npm install serve-index@1.9.1
First, import the serve-index
package at the top of the server file.
Then, include the express.static
and serveIndex
middlewares and tell them the path to access from and the name of the directory:
const express = require('express');
const serveIndex = require('serve-index');
const app = express();
app.use((req, res, next) => {
console.log('Time: ', Date.now());
next();
});
app.use('/request-type', (req, res, next) => {
console.log('Request type: ', req.method);
next();
});
app.use('/public', express.static('public'));
app.use('/public', serveIndex('public'));
app.get('/', (req, res) => {
res.send('Successful response.');
});
app.listen(3000, () => console.log('Example app is listening on port 3000.'));
Now, restart your server and navigate to localhost:3000/public
. You will be presented with a listing of all your files!
In this article, you installed and used Express to build a web server. You also used built-in and third-party middleware functions.
Continue your learning with How To Use the req Object in Express, How To Use the res Object in Express, and How To Define Routes and HTTP Request Methods in Express.
Inquirer.js is a collection of common interactive command-line user interfaces. This includes typing answers to questions or selecting a choice from a list.
The inquirer
package provides several default prompts and is highly configurable. It is also extensible by way of a plug-in interface. It even supports promises and async/await
syntax.
In this article, you will install and explore some of the features of Inquirer.js.
If you would like to follow along with this article, you will need:
This tutorial was verified with Node v15.14.0, npm
v7.10.0, and inquirer
v8.0.0.
First, open your terminal window and create a new project directory:
mkdir inquirer-example
Then, navigate to this directory:
cd inquirer-example
To start adding prompts to your Node.js scripts, you will need to install the inquirer
package:
- npm install inquirer
At this point, you have a new project ready to use Inquirer.js.
Now, create a new index.js
file in your project directory and open it with your code editor.
Within your script, be sure to require inquirer
:
const inquirer = require('inquirer');
Add a prompt asking the user for their favorite reptile:
const inquirer = require('inquirer');
inquirer
.prompt([
{
name: 'faveReptile',
message: 'What is your favorite reptile?'
},
])
.then(answers => {
console.info('Answer:', answers.faveReptile);
});
Revisit your terminal window and run the script:
- node index.js
You will be presented with a prompt:
Output? What is your favorite reptile?
Providing an answer will display the response:
Output? What is your favorite reptile? Crocodiles
Answer: Crocodiles
You can provide a default
value that allows the user to press ENTER
without submitting any answer:
const inquirer = require('inquirer');
inquirer
.prompt([
{
name: 'faveReptile',
message: 'What is your favorite reptile?',
default: 'Alligators'
},
])
.then(answers => {
console.info('Answer:', answers.faveReptile);
});
Run the script again, and you will be presented with a prompt:
Output? What is your favorite reptile? (Alligators)
Pressing ENTER
without an answer will submit the default answer:
Output? What is your favorite reptile? Alligators
Answer: Alligators
Now, you can create prompts and set default values.
You may have noticed the .prompt()
method accepts an array of objects. That’s because you can string a series of prompt questions together and all of the answers will be available by name as part of the answers
variable once all of the prompts have been resolved.
Revisit index.js
in your code editor and add a prompt asking the user for their favorite color:
const inquirer = require('inquirer');
inquirer
.prompt([
{
name: 'faveReptile',
message: 'What is your favorite reptile?',
default: 'Alligators'
},
{
name: 'faveColor',
message: 'What is your favorite color?',
default: '#008f68'
},
])
.then(answers => {
console.info('Answers:', answers);
});
Run the script again, and you will be presented with two prompts:
Output? What is your favorite reptile? Alligators
? What is your favorite color? #008f68
Answers: { faveReptile: 'Alligators', faveColor: '#008f68' }
Now, you can create multiple prompts.
inquirer
does support more than prompting a user for text input. For the sake of example, the following types will be showcased by themselves, but you could just as well chain them together by passing them in the same array.
The list
type allows you to present the user with a fixed set of options to pick from, instead of a free form input as the input
type provides:
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'list',
name: 'reptile',
message: 'Which is better?',
choices: ['alligator', 'crocodile'],
},
])
.then(answers => {
console.info('Answer:', answers.reptile);
});
Revisit your terminal window and run the script:
- node list.js
You will be presented with a list
prompt:
Output? Which is better? (Use arrow keys)
❯ alligator
crocodile
The user can use the ARROW UP
and ARROW DOWN
keys to navigate the list of choices. j
and k
keyboard navigation is also available.
The rawlist
type is similar to list
. It displays a list of choices and allows the user to enter the index of their choice (starting at 1
):
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'rawlist',
name: 'reptile',
message: 'Which is better?',
choices: ['alligator', 'crocodile'],
},
])
.then(answers => {
console.info('Answer:', answers.reptile);
});
Revisit your terminal window and run the script:
- node rawlist.js
You will be presented with a rawlist
prompt:
Output? Which is better?
1) alligator
2) crocodile
Answer:
Submitting an invalid value will result in a "Please enter a valid index"
error.
The expand
type is reminiscent of some command-line applications that present you with a list of characters that map to functionality that can be entered. expand
prompts will initially present the user with a list of the available character values and give context to them when the key is pressed:
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'expand',
name: 'reptile',
message: 'Which is better?',
choices: [
{
key: 'a',
value: 'alligator',
},
{
key: 'c',
value: 'crocodile',
},
],
},
])
.then(answers => {
console.info('Answer:', answers.reptile);
});
Revisit your terminal window and run the script:
- node expand.js
You will be presented with an expand
prompt:
Output? Which is better? (acH)
By default, the H
option is included, which stands for "Help". Entering H
and hitting ENTER
will switch to a list of the options, indexed by the characters that can then be entered to make a selection.
Output? Which is better? (acH)
a) alligator
c) crocodile
h) Help, list all options
Answer:
Submitting an invalid value will result in a "Please enter a valid command"
error.
The checkbox
type is also similar to list
. Instead of a single selection, it allows you to select multiple choices.
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'checkbox',
name: 'reptiles',
message: 'Which reptiles do you love?',
choices: [
'Alligators', 'Snakes', 'Turtles', 'Lizards',
],
},
])
.then(answers => {
console.info('Answer:', answers.reptiles);
});
Revisit your terminal window and run the script:
- node checkbox.js
You will be presented with a checkbox
prompt:
Output? Which reptiles do you love? (Press <space> to select, <a> to toggle all, <i> to invert selection)
❯◯ Alligators
◯ Snakes
◯ Turtles
◯ Lizards
Similar to the other list
types, you can use the arrow keys to navigate. To make a selection, you hit SPACE
and can also select all with a
or invert your selection with i
.
OutputAnswer: [ 'Alligators', 'Snakes', 'Turtles', 'Lizards' ]
Unlike the other prompt types, the answer for this prompt type will return an array instead of a string. It will always return an array, even if the user opted to not select any items.
The password
type will hide input from the user. This allows users to provide sensitive information that should not be seen:
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'password',
name: 'secret',
message: 'Tell me a secret',
},
])
.then(answers => {
// Displaying the password for debug purposes only.
console.info('Answer:', answers.secret);
});
Revisit your terminal window and run the script:
- node password.js
You will be presented with a password
prompt:
Output? Tell me a secret [hidden]
The input is hidden from the user.
The editor
type allows users to use their default text editor for larger text inputs.
const inquirer = require('inquirer');
inquirer
.prompt([
{
type: 'editor',
name: 'story',
message: 'Tell me a story, a really long one!',
},
])
.then(answers => {
console.info('Answer:', answers.story);
});
Revisit your terminal window and run the script:
- node editor.js
You will be presented with an editor
prompt:
Output? Tell me a story, a really long one! Press <enter> to launch your preferred editor.
inquirer
will attempt to open a text editor on the user’s system based on the value of the $EDITOR
and $VISUAL
environment variables. If neither is present, vim
(Linux) and notepad.exe
(Windows) will be used instead.
In this article, you installed and explored some of the features of Inquirer.js. This tool can be useful for retrieving information from users.
Continue your learning with some of the plugins, like inquirer-autocomplete-prompt
, inquirer-search-list
, and inquirer-table-prompt
.
Koa is a new web framework created by the team behind Express. It aims to be a modern and more minimalist version of Express.
Among its characteristics are its support for and reliance on new JavaScript features such as generators and async/await. Koa also does not ship with any middleware, though it can be extended using custom and existing plugins.
In this article, you will learn more about the Koa framework and build an app to get familiar with its functionality and philosophy.
If you would like to follow along with this article, you will need:
Note: This tutorial has been revised from Koa 1.0 to Koa 2.0. Refer to the migration documentation for updating your 1.0 implementations.
This tutorial was verified with Node v15.14.0, npm
v7.10.0, koa
v2.13.1, @koa/router
v10.0.0, and koa-ejs
v4.3.0.
To begin, create a new directory for your project. This can be done by copying and running the command below in your terminal:
- mkdir koa-example
Note: You can give your project any name, but this article will be using koa-example
as the project name and directory.
At this point, you have created your project directory koa-example
. Navigate to the newly created project directory.
- cd koa-example
Then, initialize your Node project from inside the directory.
- npm init -y
After running the npm init
command, you will have a package.json
file with the default configuration.
Next, run this command to install Koa:
- npm install koa@2.13.1
Your application is now ready to use Koa.
First, create the index.js
file. Then, using your code editor of choice, open the index.js
file and add the following lines of code:
'use strict';
const Koa = require('koa');
const app = new Koa();
app.use(ctx => {
ctx.body = 'Hello World';
});
app.listen(1234);
In the code above, you created a Koa application that runs on port 1234
. You can run the application using the command:
- node index.js
And visit the application on http://localhost:1234
.
As mentioned earlier, Koa.js does not ship with any bundled middleware and, unlike its predecessor Express, it does not handle routing by default.
In order to implement routes in your Koa app, you will install a middleware library for routing in Koa, Koa Router.
Open your terminal window and run the following command:
- npm install @koa/router@10.0.0
Note: Previously koa-router
was the recommended package, but the @koa/router
is now the officially supported package.
To make use of the router in your application, amend your index.js
file:
'use strict';
const Koa = require('koa');
const Router = require('@koa/router');
const app = new Koa();
const router = new Router();
router.get('koa-example', '/', (ctx) => {
ctx.body = 'Hello World';
});
app
.use(router.routes())
.use(router.allowedMethods());
app.listen(1234);
This code defines a route on the base URL of your application (http://localhost:1234
) and registers this route to your Koa application.
For more information on route definition in Koa.js applications, visit the Koa Router library documentation.
As previously established, Koa comes as a minimalistic framework, therefore, to implement view rendering with a template engine you will have to install a middleware library. There are several libraries to choose from but in this article, you will use Koa ejs.
Open your terminal window and run the following command:
- npm install koa-ejs@4.3.0
Next, amend your index.js
file to register your templating with the snippet below:
'use strict';
const Koa = require('koa');
const Router = require('@koa/router');
const render = require('koa-ejs');
const path = require('path');
const app = new Koa();
const router = new Router();
render(app, {
root: path.join(__dirname, 'views'),
layout: 'index',
viewExt: 'html',
cache: false,
debug: true
});
router.get('koa-example', '/', (ctx) => {
ctx.body = 'Hello World';
});
app
.use(router.routes())
.use(router.allowedMethods());
app.listen(1234);
When registering your template engine, you defined the root directory of your view files, the extension of the view files, and the base view file (which other views extend).
Now that you have registered your template middleware, amend your route definition to render a template file:
// ...
router.get('koa-example', '/', (ctx) => {
let koalaFacts = [];
koalaFacts.push({
meta_name: 'Color',
meta_value: 'Black and white'
});
koalaFacts.push({
meta_name: 'Native Country',
meta_value: 'Australia'
});
koalaFacts.push({
meta_name: 'Animal Classification',
meta_value: 'Mammal'
});
koalaFacts.push({
meta_name: 'Life Span',
meta_value: '13 - 18 years'
});
koalaFacts.push({
meta_name: 'Are they bears?',
meta_value: 'No'
});
return ctx.render('index', {
attributes: koalaFacts
});
})
// ...
Your base route renders the index.html
file found in the views
directory.
Now, create this directory and file. Then open index.html
and add the following lines of code:
<h2>Koala - a directory of Koala attributes</h2>
<ul class="list-group">
<% attributes.forEach( function(attribute) { %>
<li class="list-group-item">
<%= attribute.meta_name %> - <%= attribute.meta_value %>
</li>
<% }) %>
</ul>
Now, running the application and viewing it in a web browser will display the following:
OutputKoala - a directory of Koala attributes
Color - Black and white
Native Country - Australia
Animal Classification - Mammal
Life Span - 13 - 18 years
Are they bears? - No
For more options with using the koa-ejs
template middleware, visit the library documentation.
Koa handles errors by defining an error middleware early in your entry point file. The error middleware must be defined early because only errors from middleware defined after the error middleware will be caught.
Using your index.js
file as an example, make the following changes to the code:
'use strict';
const Koa = require('koa');
const Router = require('@koa/router');
const render = require('koa-ejs');
const path = require('path');
const app = new Koa();
const router = new Router();
app.use(async (ctx, next) => {
try {
await next()
} catch(err) {
console.log(err.status)
ctx.status = err.status || 500;
ctx.body = err.message;
}
});
// ...
This block of code will catch any error thrown during the execution of your application.
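Why does defining the error middleware first catch downstream errors? Because each Koa middleware awaits the rest of the chain through next(), an early try/catch wraps everything registered after it. A minimal, illustrative model of this "onion" behavior in plain async JavaScript (not Koa's actual implementation):

```javascript
// Sketch: a miniature version of Koa's middleware cascade. Each middleware
// awaits the rest of the chain via next(), so a try/catch registered early
// catches errors thrown by anything downstream.
async function compose(middlewares, ctx) {
  let index = 0;
  async function next() {
    const middleware = middlewares[index++];
    if (middleware) await middleware(ctx, next);
  }
  await next();
}

const ctx = {};
compose(
  [
    async (ctx, next) => {
      try {
        await next();
      } catch (err) {
        ctx.status = err.status || 500; // the downstream error lands here
        ctx.body = err.message;
      }
    },
    async (ctx) => {
      const err = new Error('internal server error');
      err.status = 500;
      throw err;
    },
  ],
  ctx
).then(() => console.log(ctx.status, ctx.body)); // 500 internal server error
```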
You can test this by throwing an error in the function body of the route you defined:
// ...
router.get('error', '/error', (ctx) => {
ctx.throw(500, 'internal server error');
});
app
.use(router.routes())
.use(router.allowedMethods());
app.listen(1234);
Now, running the application and visiting /error
in a web browser will display the following:
Outputinternal server error
The Koa response object is usually embedded in its context object. Using a route definition, let’s look at an example of setting responses:
// ...
router.get('status', '/status', (ctx) => {
ctx.status = 200;
ctx.body = 'ok';
})
app
.use(router.routes())
.use(router.allowedMethods());
app.listen(1234);
Now, running the application and visiting /status
in a web browser will display the following:
Outputok
Your application now handles errors and responses.
In this article, you had a brief introduction to Koa and how to implement some common functionalities in a Koa project. Koa is a minimalist and flexible framework that can be extended to more functionality than this article has shown. Because of its similarity to a leaner, more modern Express, some have even described it as Express 5.0 in spirit.
Environment variables allow you to switch between your local development, testing, staging, user acceptance testing (UAT), production, and any other environments that are part of your project’s workflow.
Instead of passing variables into your scripts individually, env-cmd
lets you group variables into an environment file (.env
) and pass them to your script.
In this article, you will install and use env-cmd
in an example project.
To complete this tutorial, you will need:
.gitignore
. This may require installing and configuring git
if you wish to follow along.Note: This tutorial has been updated to use the commands for env-cmd
after version 9.0.0.
This tutorial was verified with Node v15.14.0, npm
v7.10.0, and env-cmd
v10.0.1.
This tutorial assumes you have a new project. Create a new directory:
- mkdir env-cmd-example
Then navigate to the directory:
- cd env-cmd-example
It is generally considered a bad practice to commit your environment files to your version control system. If the repository is forked or shared, the credentials will be available to others as they will be forever recorded in the project history.
It is recommended to add the file to your .gitignore
.
Note: This will not be required for the scope of this tutorial, but is presented here for educational purposes.
Initialize a new git
project:
- git init
Create a .gitignore
file and add the patterns to exclude your environment file:
.env
.env.js
.env.json
.env-cmdrc
For this tutorial, you can exclude .env
, .env.js
, .env.json
, .env-cmdrc
.
Then, create an .env
file for the project.
Open the file in your code editor and add the following line of code:
creature=shark
green=#008f68
yellow=#fae042
This sets creature to shark, green to #008f68, and yellow to #fae042.
Then, create a new log.js
file:
console.log('NODE_ENV:', process.env.NODE_ENV);
console.log('Creature:', process.env.creature);
console.log('Green:', process.env.green);
console.log('Yellow:', process.env.yellow);
This will log the previously defined variables to the console. And it will also print out the NODE_ENV
value.
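Conceptually, env-cmd reads the environment file, parses its KEY=VALUE lines, and merges them into the environment of the child process it spawns. A rough, illustrative parser is sketched below; env-cmd itself also supports comments, quoting, and several other file formats:

```javascript
// Sketch: roughly what env-cmd does with a .env file -- parse KEY=VALUE
// lines into an object that can be merged into process.env for a child
// process. Illustrative only; env-cmd's real parser handles more syntax.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split('\n')) {
    const match = line.match(/^([^=#\s]+)=(.*)$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}

const vars = parseEnv('creature=shark\ngreen=#008f68\nyellow=#fae042');
console.log(vars); // { creature: 'shark', green: '#008f68', yellow: '#fae042' }
```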
Now, your example is prepared for using environment files with env-cmd
.
Using env-cmd
Modern npm
and yarn
can run env-cmd
without making it a dependency.
Use either npx:
- npx env-cmd node log.js
Or yarn run
:
- yarn run env-cmd node log.js
Otherwise, you can install the package as a dependency or devDependency:
- npm install env-cmd@10.0.1
The env-cmd
package installs an executable script named env-cmd
which can be called before your scripts to easily load environment variables from an external file.
Depending on your setup, you can reference env-cmd
in a few different ways.
Perhaps the most compatible across package managers is to add a custom script to your package.json
file:
{
"scripts": {
"print-log": "env-cmd node log.js"
}
}
For example, with npm
, you will be able to run this custom script with the following command:
- npm run print-log
If you would prefer to use env-cmd
directly from the command line, you can call it directly from node_modules
:
- ./node_modules/.bin/env-cmd node log.js
Going forward, this tutorial will use the npx
approach, but all approaches are designed to work similarly.
Now, use one of the approaches in your terminal.
Regardless of how you choose to run the script, env-cmd
will load the .env
file, and the logging script will report back the variables.
Output:
NODE_ENV: undefined
Creature: shark
Green: #008f68
Yellow: #fae042
You may have noticed that the NODE_ENV
value is undefined
. That’s because NODE_ENV
was not defined in the .env
file.
It is possible to pass in NODE_ENV
before calling env-cmd
.
For example, here is the command for npx
:
- NODE_ENV=development npx env-cmd node log.js
Run the command again with the NODE_ENV
defined:
Output:
NODE_ENV: development
Creature: shark
Green: #008f68
Yellow: #fae042
At this point, you have learned to use env-cmd
with an .env
file.
env-cmd
by default expects an .env
file in the project root directory. However, you can change the file type and path with the --file
(-f
) option.
Regardless of how you reference it, you have a wide variety of file formats available to store your environment variables.
Here is an example of an .env.json
file:
{
"creature": "shark",
"green": "#008f68",
"yellow": "#fae042"
}
And here is an example of using this file with env-cmd
:
- NODE_ENV=development npx env-cmd --file .env.json node log.js
Now you have learned how to use a JSON environment file.
Here is an example of an .env.js
file:
module.exports = {
creature: 'shark',
green: '#008f68',
yellow: '#fae042'
};
And here is an example of using this file with env-cmd
:
- NODE_ENV=development npx env-cmd --file .env.js node log.js
Now you have learned how to use a JavaScript environment file.
The rc
file format is special because it allows you to define multiple environments in a single JSON file and reference the environment by name instead of by file.
The “runcom” file is also special in that it must be named .env-cmdrc
and be present at the root of your project.
Here is an example of an .env-cmdrc
file with environments defined for development
, staging
, and production
:
{
"development": {
"NODE_ENV": "development",
"creature": "shark",
"green": "#008f68",
"yellow": "#fae042",
"otherVar1": 1
},
"staging": {
"NODE_ENV": "staging",
"creature": "whale",
"green": "#6db65b",
"yellow": "#efbb35",
"otherVar2": 2
},
"production": {
"NODE_ENV": "production",
"creature": "octopus",
"green": "#4aae9b",
"yellow": "#dfa612",
"otherVar3": 3
}
}
Using the .env-cmdrc
values will require an --environments
(-e
) option.
Then you can reference a single environment:
- npx env-cmd --environments development node log.js
You can even reference multiple environments, which will merge together each of the environment’s variables, with the last environment taking precedence if there are overlapping variables:
- npx env-cmd --environments development,staging,production node log.js
By specifying all three of our environments, each of the otherVar
values will be set, with the rest of the variables being sourced from the final environment listed, production
.
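Assuming later environments simply override earlier ones, the merge can be sketched in plain JavaScript (hypothetical values mirroring the .env-cmdrc file above; this is not env-cmd's implementation):

```javascript
// Sketch of how -e merges multiple environments: later names override
// earlier ones, similar to chained Object.assign.
const rc = {
  development: { NODE_ENV: "development", creature: "shark", otherVar1: 1 },
  staging: { NODE_ENV: "staging", creature: "whale", otherVar2: 2 },
  production: { NODE_ENV: "production", creature: "octopus", otherVar3: 3 },
};

const merged = ["development", "staging", "production"]
  .reduce((env, name) => Object.assign(env, rc[name]), {});

console.log(merged.creature); // "octopus" -- the last environment wins
console.log(merged.otherVar1, merged.otherVar2, merged.otherVar3); // 1 2 3
```

Each environment contributes the keys the others lack, while overlapping keys resolve to the last environment listed.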
--fallback
In situations where a custom environment file is not present:
- npx env-cmd -f .env.missing node log.js
env-cmd
will throw an error:
Output:
Error: Failed to find .env file at path: .env.missing
In situations where there is an unexpected problem with the custom env file path, env-cmd
can attempt to load an .env
file from the root of your project. To do so, pass in the --fallback
flag:
- npx env-cmd --file .env.missing --fallback node log.js
Now, if there is a valid .env
file to fall back to, this command will not display any errors.
--no-override
There are situations where you may want to keep all or some of the variables already set in the environment.
To respect the existing environment variables instead of using the values in your .env
file, pass env-cmd
the --no-override
flag:
- NODE_ENV=development creature=squid npx env-cmd --no-override node log.js
This will result in the following output:
Output:
NODE_ENV: development
Creature: squid
Green: #008f68
Yellow: #fae042
Notice that the creature
value has been set to squid
instead of shark
which was defined in the .env
file.
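In plain JavaScript terms, --no-override flips the merge order so that pre-set variables win. A sketch, under the assumption that the behavior matches a simple object spread (this is not env-cmd's code):

```javascript
// Sketch of --no-override: values already set in the environment take
// precedence over the values loaded from the .env file.
const fromFile = { creature: "shark", green: "#008f68", yellow: "#fae042" };
const alreadySet = { NODE_ENV: "development", creature: "squid" };

// Spreading alreadySet last lets the existing variables win.
const finalEnv = { ...fromFile, ...alreadySet };

console.log(finalEnv.creature); // "squid" -- the pre-set value survives
console.log(finalEnv.green);    // "#008f68" -- still filled from the file
```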
In this article, you installed and used env-cmd
in an example project.
Using environment files can help you switch between “development” and “production” environments.
Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that they will restart on reboot or failure and are safe for use in a production environment.
In this tutorial, you will set up a production-ready Node.js environment on a single Ubuntu 20.04 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS using a free certificate provided by Let’s Encrypt.
Deploy your Node applications from GitHub using DigitalOcean App Platform. Let DigitalOcean focus on scaling your app.
This guide assumes that you have the following:
When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://example.com/
.
Let’s begin by installing the latest LTS release of Node.js, using the NodeSource package archives.
First, install the NodeSource PPA in order to get access to its contents. Make sure you’re in your home directory, and use curl
to retrieve the installation script for the most recent LTS version of Node.js from its archives.
- cd ~
- curl -sL https://deb.nodesource.com/setup_14.x -o nodesource_setup.sh
You can inspect the contents of this script with nano
or your preferred text editor:
- nano nodesource_setup.sh
When you’re done inspecting the script, run it under sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from Nodesource, you can install the Node.js package:
- sudo apt install nodejs
To check which version of Node.js you have installed after these initial steps, type:
- node -v
Output:
v14.4.0
Note: When installing from the NodeSource PPA, the Node.js executable is called nodejs
, rather than node
.
The nodejs
package contains the nodejs
binary as well as npm
, a package manager for Node modules, so you don’t need to install npm
separately.
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Execute this command to verify that npm
is installed and to create the configuration file:
- npm -v
Output:
6.14.5
In order for some npm
packages to work (those that require compiling code from source, for example), you will need to install the build-essential
package:
- sudo apt install build-essential
You now have the necessary tools to work with npm
packages that require compiling code from source.
With the Node.js runtime installed, let’s move on to writing a Node.js application.
Let’s write a Hello World application that returns “Hello World” to any HTTP requests. This sample application will help you get Node.js set up. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.
First, let’s create a sample application called hello.js
:
- cd ~
- nano hello.js
Insert the following code into the file:
const http = require('http');
const hostname = 'localhost';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Save the file and exit the editor.
This Node.js application listens on the specified address (localhost
) and port (3000
), and returns “Hello World!” with a 200
HTTP success code. Since we’re listening on localhost
, remote clients won’t be able to connect to our application.
To test your application, type:
- node hello.js
You will receive the following output:
Output:
Server running at http://localhost:3000/
Note: Running a Node.js application in this manner will block additional commands until the application is killed by pressing CTRL+C
.
To test the application, open another terminal session on your server, and connect to localhost
with curl
:
- curl http://localhost:3000
If you get the following output, the application is working properly and listening on the correct address and port:
Output:
Hello World!
If you do not get the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.
Once you’re sure it’s working, kill the application (if you haven’t already) by pressing CTRL+C
.
Next let’s install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.
Use npm
to install the latest version of PM2 on your server:
- sudo npm install pm2@latest -g
The -g
option tells npm
to install the module globally, so that it’s available system-wide.
Let’s first use the pm2 start
command to run your application, hello.js
, in the background:
- pm2 start hello.js
This also adds your application to PM2’s process list, which is outputted every time you start an application:
Output:
...
[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ hello │ fork │ 0 │ online │ 0% │ 25.2mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
As indicated above, PM2 automatically assigns an App name
(based on the filename, without the .js
extension) and a PM2 id
. PM2 also maintains other information, such as the PID
of the process, its current status, and memory usage.
Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup
subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes on server boots:
- pm2 startup systemd
The last line of the resulting output will include a command to run with superuser privileges in order to set PM2 to start on boot:
Output:
[PM2] Init System found: systemd
sammy
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Run the command from the output, with your username in place of sammy
:
- sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
As an additional step, we can save the PM2 process list and corresponding environments:
- pm2 save
You have now created a systemd unit that runs pm2
for your user on boot. This pm2
instance, in turn, runs hello.js
.
Start the service with systemctl
:
- sudo systemctl start pm2-sammy
If at this point you encounter an error, you may need to reboot, which you can achieve with sudo reboot
.
Check the status of the systemd unit:
- systemctl status pm2-sammy
For a detailed overview of systemd, please review Systemd Essentials: Working with Services, Units, and the Journal.
In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.
Stop an application with this command (specify the PM2 App name
or id
):
- pm2 stop app_name_or_id
Restart an application:
- pm2 restart app_name_or_id
List the applications currently managed by PM2:
- pm2 list
Get information about a specific application using its App name
:
- pm2 info app_name
The PM2 process monitor can be pulled up with the monit
subcommand. This displays the application status, CPU, and memory usage:
- pm2 monit
Note that running pm2
without any arguments will also display a help page with example usage.
Now that your Node.js application is running and managed by PM2, let’s set up the reverse proxy.
Your application is running and listening on localhost
, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.
In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/sites-available/example.com
file. Open this file for editing:
- sudo nano /etc/nginx/sites-available/example.com
Within the server
block, you should have an existing location /
block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the port number in the proxy_pass directive to match:
server {
...
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
This configures the server to respond to requests at its root. Assuming our server is available at example.com
, accessing https://example.com/
via a web browser would send the request to hello.js
, listening on port 3000
at localhost
.
You can add additional location
blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001
, you could add this location block to allow access to it via https://example.com/app2
:
server {
...
location /app2 {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
Once you are done adding the location blocks for your applications, save the file and exit your editor.
Make sure you didn’t introduce any syntax errors by typing:
- sudo nginx -t
Restart Nginx:
- sudo systemctl restart nginx
Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your server’s URL (its public IP address or domain name).
Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on an Ubuntu 20.04 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.
Mongoose is one of the fundamental tools for manipulating data for a Node.js and MongoDB backend.
In this article, you will be looking into using Mongoose with the MongoDB Atlas remote database. The example in this tutorial will consist of a list of food and their caloric values. Users will be able to create new items, read items, update items, and delete items.
Some familiarity with async and await is assumed.
Downloading and installing a tool like Postman is recommended for testing API endpoints.
This tutorial was verified with Node v15.3.0, npm
v7.4.0, express
v4.17.1, mongoose
v5.11.12, and MongoDB v4.2.
This project also requires a MongoDB Atlas account.
After creating an account and signing in, follow these steps to deploy a free tier cluster.
Once you have set a cluster, a database user, and an IP address you will be prepared for later acquiring the connection string as you set up the rest of your project.
In this section, you will create a directory for your project and install dependencies.
Create a new directory for your project:
- mkdir mongoose-mongodb-atlas-example
Navigate to the newly created directory:
- cd mongoose-mongodb-atlas-example
At this point, you can initialize a new npm project:
- npm init -y
Next, install express
and mongoose
:
- npm install express@4.17.1 mongoose@5.11.12
At this point, you will have a new project with express
and mongoose
.
In this section, you will create a new file to run the Express server, connect to the MongoDB Atlas database, and import future routes.
Create a new server.js
file and add the following lines of code:
const express = require("express");
const mongoose = require("mongoose");
const foodRouter = require("./routes/foodRoutes.js");
const app = express();
app.use(express.json());
mongoose.connect(
"mongodb+srv://madmin:<password>@clustername.mongodb.net/<dbname>?retryWrites=true&w=majority",
{
useNewUrlParser: true,
useFindAndModify: false,
useUnifiedTopology: true
}
);
app.use(foodRouter);
app.listen(3000, () => {
console.log("Server is running...");
});
Pay attention to the connection string. This is the connection string that is provided by MongoDB Atlas. You will need to replace the administrator account (madmin
), password, cluster name (clustername
), and database name (dbname
) with the values that are relevant to your cluster:
mongodb+srv://madmin:<password>@clustername.mongodb.net/<dbname>?retryWrites=true&w=majority
mongoose.connect()
will take the connection string and an object of configuration options. For the purpose of this tutorial, the useNewUrlParser
, useFindAndModify
, and useUnifiedTopology
configuration settings are necessary to avoid a deprecation warning.
At this point, you have the start of an Express server. You will next need to define the schema and handle routes.
First, you will need a pattern to structure your data on, and these patterns are referred to as schemas. Schemas allow you to decide exactly what data you want and what options you want the data to have as an object.
In this tutorial, you will use the mongoose.model
method to make it usable with actual data and export it as a variable you can use in foodRoutes.js
.
Create a new models
directory:
- mkdir models
Inside of this new directory, create a new food.js
file and add the following lines of code:
const mongoose = require("mongoose");
const FoodSchema = new mongoose.Schema({
name: {
type: String,
required: true,
trim: true,
lowercase: true,
},
calories: {
type: Number,
default: 0,
validate(value) {
if (value < 0) throw new Error("Negative calories aren't real.");
},
},
});
const Food = mongoose.model("Food", FoodSchema);
module.exports = Food;
This code defines your FoodSchema
. It will consist of a name
value that is of type String
, it will be required
, trim
any whitespace, and set to lowercase
characters. It will also consist of a calories
value that is of type Number
, it will have a default
of 0, and validate
to ensure no negative numbers are submitted.
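Conceptually, the schema behaves like the following plain-JavaScript sketch (the applyFoodSchema helper is hypothetical; Mongoose performs these steps internally on assignment and validation):

```javascript
// Plain-JavaScript sketch of what the FoodSchema options do to input.
function applyFoodSchema({ name, calories = 0 }) {
  if (name === undefined) throw new Error("name is required");          // required: true
  const cleaned = String(name).trim().toLowerCase();                    // trim + lowercase
  if (calories < 0) throw new Error("Negative calories aren't real.");  // validate()
  return { name: cleaned, calories };                                   // default: 0 applied above
}

console.log(applyFoodSchema({ name: "  Cotton Candy  " }));
// { name: 'cotton candy', calories: 0 }
```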
Once you have your data model set up, you can start setting up routes to use it. This will utilize various querying functions available through Mongoose.
You will start by reading all foods in the database. At this point, it will be an empty array.
Create a new routes
directory:
- mkdir routes
Inside of this new directory, create a new foodRoutes.js
file and add the following lines of code:
const express = require("express");
const foodModel = require("../models/food");
const app = express();
app.get("/foods", async (request, response) => {
const foods = await foodModel.find({});
try {
response.send(foods);
} catch (error) {
response.status(500).send(error);
}
});
module.exports = app;
This code establishes a /foods
endpoint for GET requests (note the plural ‘s’). The Mongoose query function find()
returns all objects with matching parameters. Since no parameters have been provided, it will return all of the items in the database.
Since Mongoose functions are asynchronous, you will be using async/await. Once you have the data, this code uses a try/catch block to send it. This will be useful to verify the data with Postman.
Navigate to the root of your project directory and run your Express server with the following command in your terminal:
- node server.js
In Postman, create a new Read All Food request. Ensure the request type is set to GET
. Set the request URL to localhost:3000/foods
. And click Send.
Note: If you need assistance navigating the Postman interface for requests, consult the official documentation.
The Postman results will display an empty array.
Next, you will build the functionality to create a new food item and save it to the database.
Revisit the foodRoutes.js
file and add the following lines of code between app.get
and module.exports
:
// ...
app.post("/food", async (request, response) => {
const food = new foodModel(request.body);
try {
await food.save();
response.send(food);
} catch (error) {
response.status(500).send(error);
}
});
// ...
This code establishes a /food
endpoint for POST requests. The Mongoose query function .save()
is used to save data passed to it to the database.
In Postman, create a new request called Create New Food. Ensure the request type is set to POST
. Set the request URL to localhost:3000/food
.
In the Body section, select raw and JSON. Then, add a new food item by constructing a JSON object with a name
and calories
:
{
"name": "cotton candy",
"calories": 100
}
After sending the Create New Food request, send the Read All Food request again. The Postman results will display the newly added object.
Every object created with Mongoose is given its own _id, and you can use this to target specific items. It will be a mix of numbers and letters. For example: 5d1f6c3e4b0b88fb1d257237.
Next, you will build the functionality to update an existing food item and save the changes to the database.
Revisit the foodRoutes.js
file and add the following lines of code between app.post
and module.exports
:
// ...
app.patch("/food/:id", async (request, response) => {
  try {
    // { new: true } makes findByIdAndUpdate return the updated document
    const food = await foodModel.findByIdAndUpdate(
      request.params.id,
      request.body,
      { new: true }
    );
    response.send(food);
  } catch (error) {
    response.status(500).send(error);
  }
});
// ...
This code establishes a /food/:id endpoint for PATCH requests. The Mongoose query function .findByIdAndUpdate() takes the target's id and the request data you want to update it with, and persists the change to the database directly.
In Postman, create a new request called Update Food. Ensure the request type is set to PATCH
. Set the request URL to localhost:3000/food/<id>
, where id
is the identifying string of the food you created previously.
In the Body section, select raw and JSON. Then, modify your food item by constructing a JSON object with a new calories value:
{
"calories": 999
}
After sending the Update Food request, send the Read All Food request again. The Postman results will display the object with modified calories
.
Finally, you will build the functionality to remove an existing food item and save the changes to the database.
Revisit the foodRoutes.js
file and add the following lines of code between app.patch
and module.exports
:
// ...
app.delete("/food/:id", async (request, response) => {
try {
const food = await foodModel.findByIdAndDelete(request.params.id);
if (!food) return response.status(404).send("No item found");
response.status(200).send();
} catch (error) {
response.status(500).send(error);
}
});
// ...
This code establishes a /food/:id
endpoint for DELETE requests. The Mongoose query function .findByIdAndDelete()
takes the target’s id
and removes it.
In Postman, create a new request called Delete Food. Ensure the request type is set to DELETE
. Set the request URL to localhost:3000/food/<id>
, where id
is the identifying string of the food you created previously.
After sending the Delete Food request, send the Read All Food request again. The Postman results will display an array without the item that was deleted.
Note: Now that you have completed this tutorial, you may wish to Terminate any MongoDB Atlas clusters that you are no longer using.
At this point, you have an Express server using Mongoose methods to interact with a MongoDB Atlas cluster.
In this article, you learned how to use Mongoose methods to quickly make and manage your backend data.
If you’d like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
If you’d like to learn more about MongoDB, check out our MongoDB topic page for exercises and programming projects.
Server-Sent Events (SSE) is a technology based on HTTP. On the client-side, it provides an API called EventSource
(part of the HTML5 standard) that allows us to connect to the server and receive updates from it.
Before making the decision to use server-sent events, we must take into account two very important aspects:
If your project only receives something like stock prices or text information about something in progress it is a candidate for using Server-Sent Events instead of an alternative like WebSockets.
In this article, you will build a complete solution for both the backend and frontend to handle real-time information flowing from server to client. The server will be in charge of dispatching new updates to all connected clients and the web app will connect to the server, receive these updates and display them.
To follow through this tutorial, you’ll need:
This tutorial was verified with cURL v7.64.1, Node v15.3.0, npm
v7.4.0, express
v4.17.1, body-parser
v1.19.0, cors
v2.8.5, and react
v17.0.1.
In this section, you will create a new project directory. Inside of the project directory will be a subdirectory for the server. Later, you will also create a subdirectory for the client.
First, open your terminal and create a new project directory:
- mkdir node-sse-example
Navigate to the newly created project directory:
- cd node-sse-example
Next, create a new server directory:
- mkdir sse-server
Navigate to the newly created server directory:
- cd sse-server
Initialize a new npm
project:
- npm init -y
Install express
, body-parser
, and cors
:
- npm install express@4.17.1 body-parser@1.19.0 cors@2.8.5 --save
This completes setting up dependencies for the backend.
In this section, you will develop the backend of the application. It will need to support these features:
GET /events endpoint to register for updates
POST /fact endpoint for new facts
GET /status endpoint to know how many clients have connected
cors middleware to allow connections from the frontend app
Use the first terminal session that is in the sse-server
directory. Create a new server.js
file:
Open the server.js
file in your code editor. Require the needed modules and initialize Express app:
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const app = express();
app.use(cors());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false}));
const PORT = 3001;
let clients = [];
let facts = [];
app.get('/status', (request, response) => response.json({clients: clients.length}));
app.listen(PORT, () => {
console.log(`Facts Events service listening at http://localhost:${PORT}`)
})
Then, build the middleware for GET
requests to the /events
endpoint. Add the following lines of the code to server.js
:
// ...
function eventsHandler(request, response, next) {
const headers = {
'Content-Type': 'text/event-stream',
'Connection': 'keep-alive',
'Cache-Control': 'no-cache'
};
response.writeHead(200, headers);
const data = `data: ${JSON.stringify(facts)}\n\n`;
response.write(data);
const clientId = Date.now();
const newClient = {
id: clientId,
response
};
clients.push(newClient);
request.on('close', () => {
console.log(`${clientId} Connection closed`);
clients = clients.filter(client => client.id !== clientId);
});
}
app.get('/events', eventsHandler);
The eventsHandler
middleware receives the request
and response
objects that Express provides.
Headers are required to keep the connection open. The Content-Type
header is set to 'text/event-stream'
and the Connection
header is set to 'keep-alive'
. The Cache-Control
header is optional, set to 'no-cache'
. Additionally, the HTTP Status is set to 200
- the status code for a successful request.
After a client opens a connection, the facts
are turned into a string. Because this is a text-based transport, you must stringify the array; to fulfill the standard, the message also needs a specific format. This code declares a field called data and sets it to the stringified array. The last detail of note is that the double trailing newline \n\n is mandatory to indicate the end of an event.
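The wire format can be sketched in a few lines (the formatSSE helper is hypothetical, but the format matches the code above):

```javascript
// Sketch of the SSE wire format: each message is a "data:" field followed
// by a blank line (the double newline) that terminates the event.
function formatSSE(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

const message = formatSSE([{ info: "a fact", source: "a url" }]);
console.log(message.startsWith("data: ")); // true
console.log(message.endsWith("\n\n"));     // true
```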
A clientId
is generated based on the timestamp and the response
Express object. These are saved to the clients
array. When a client
closes a connection, the array of clients
is updated to filter
out that client
.
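The connection bookkeeping can be exercised on its own, without Express (the stand-in response object here is hypothetical):

```javascript
// Sketch of the client bookkeeping above: clients are stored with a
// timestamp id and removed from the array when they disconnect.
let clients = [];

function addClient(response) {
  const id = Date.now();
  clients.push({ id, response });
  return id;
}

function removeClient(id) {
  clients = clients.filter((client) => client.id !== id);
}

const id = addClient({ write: () => {} }); // stand-in for an Express response
console.log(clients.length); // 1
removeClient(id);
console.log(clients.length); // 0
```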
Then, build the middleware for POST
requests to the /fact
endpoint. Add the following lines of the code to server.js
:
// ...
function sendEventsToAll(newFact) {
clients.forEach(client => client.response.write(`data: ${JSON.stringify(newFact)}\n\n`))
}
async function addFact(request, response, next) {
const newFact = request.body;
facts.push(newFact);
response.json(newFact);
return sendEventsToAll(newFact);
}
app.post('/fact', addFact);
The main goal of the server is to keep all clients connected and informed when new facts are added. The addFact middleware saves the fact, returns it to the client which made the POST request, and invokes the sendEventsToAll function.
sendEventsToAll
iterates the clients
array and uses the write
method of each Express response
object to send the update.
Before the web app implementation, you can test your server using cURL:
In a terminal window, navigate to the sse-server
directory in your project directory. And run the following command:
- node server.js
It will display the following message:
Output:
Facts Events service listening at http://localhost:3001
In a second terminal window, open a connection waiting for updates with the following command:
- curl -H Accept:text/event-stream http://localhost:3001/events
This will generate the following response:
Output:
data: []
An empty array.
In a third terminal window, send a POST request to add a new fact with the following command:
- curl -X POST \
- -H "Content-Type: application/json" \
- -d '{"info": "Shark teeth are embedded in the gums rather than directly affixed to the jaw, and are constantly replaced throughout life.", "source": "https://en.wikipedia.org/wiki/Shark"}'\
- -s http://localhost:3001/fact
After the POST
request, the second terminal window should update with the new fact:
Output:
data: {"info": "Shark teeth are embedded in the gums rather than directly affixed to the jaw, and are constantly replaced throughout life.", "source": "https://en.wikipedia.org/wiki/Shark"}
Now the facts
array is populated with one item. If you close the connection in the second terminal window and open it again:
- curl -H Accept:text/event-stream http://localhost:3001/events
Instead of an empty array, you should now receive a message with this new item:
Output:
data: [{"info": "Shark teeth are embedded in the gums rather than directly affixed to the jaw, and are constantly replaced throughout life.", "source": "https://en.wikipedia.org/wiki/Shark"}]
At this point, the backend is fully functional. It is now time to implement the EventSource
API on the frontend.
In this part of our project, you will write a React app that uses the EventSource
API.
The web app will have the following set of features:
Now, open a new terminal window and navigate to the project directory. Use create-react-app
to generate a React App.
- npx create-react-app sse-client
Navigate to the newly created client directory:
- cd sse-client
Run the client application:
- npm start
This should open a new browser window with your new React application. This completes setting up dependencies for the frontend.
For styling, open the App.css
file in your code editor. And modify the contents with the following lines of code:
body {
color: #555;
font-size: 25px;
line-height: 1.5;
margin: 0 auto;
max-width: 50em;
padding: 4em 1em;
}
.stats-table {
border-collapse: collapse;
text-align: center;
width: 100%;
}
.stats-table tbody tr:hover {
background-color: #f5f5f5;
}
Then, open the App.js
file in your code editor. And modify the contents with the following lines of code:
import React, { useState, useEffect } from 'react';
import './App.css';
function App() {
const [ facts, setFacts ] = useState([]);
const [ listening, setListening ] = useState(false);
useEffect( () => {
if (!listening) {
const events = new EventSource('http://localhost:3001/events');
events.onmessage = (event) => {
const parsedData = JSON.parse(event.data);
setFacts((facts) => facts.concat(parsedData));
};
setListening(true);
}
}, [listening, facts]);
return (
<table className="stats-table">
<thead>
<tr>
<th>Fact</th>
<th>Source</th>
</tr>
</thead>
<tbody>
{
facts.map((fact, i) =>
<tr key={i}>
<td>{fact.info}</td>
<td>{fact.source}</td>
</tr>
)
}
</tbody>
</table>
);
}
export default App;
The useEffect
function argument contains the important parts: an EventSource
object with the /events
endpoint and an onmessage
method where the data
property of the event is parsed.
Unlike the cURL
response, you now have the event as an object. You can take its data
property and parse it, which gives you a valid JSON object.
Finally, this code pushes the new fact to the list of facts, and the table gets re-rendered.
Now, try adding a new fact.
In a terminal window, run the following command:
- curl -X POST \
- -H "Content-Type: application/json" \
- -d '{"info": "Shark teeth are embedded in the gums rather than directly affixed to the jaw, and are constantly replaced throughout life.", "source": "https://en.wikipedia.org/wiki/Shark"}'\
- -s http://localhost:3001/fact
The POST
request added a new fact and all the connected clients should have received it. If you check the application in the browser you will have a new row with this information.
This article served as an introduction to server-sent events. In this article, you built a complete solution for both the backend and frontend to handle real-time information flowing from server to client.
SSE was designed for text-based, unidirectional transport. Here’s the current support for EventSource
in browsers.
Continue your learning by exploring all of the features available to EventSource
like retry
.
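As a taste of what `retry` does, here is a sketch of the wire format of an SSE frame that includes it (the helper name `formatSseFrame` is hypothetical, not part of any library):

```javascript
// Hypothetical helper showing the wire format of an SSE frame.
// A `retry:` line tells EventSource how many milliseconds to wait
// before reconnecting after the connection drops.
function formatSseFrame(data, retryMs) {
  let frame = '';
  if (retryMs !== undefined) {
    frame += `retry: ${retryMs}\n`;
  }
  // A blank line terminates the frame.
  frame += `data: ${JSON.stringify(data)}\n\n`;
  return frame;
}

// Ask clients to wait 10 seconds before attempting to reconnect.
console.log(formatSseFrame({ info: 'example fact' }, 10000));
```

On the server side you would write such a frame with `res.write()` on an open `text/event-stream` response.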
JSON Web Tokens (JWTs) support authorization and information exchange.
One common use case is for allowing clients to preserve their session information after logging in. By storing the session information locally and passing it to the server for authentication when making requests, the server can trust that the client is a registered user.
Warning: Please be aware of the security risk of storing JWTs in localStorage.
In this article, you will learn about the applications of JWTs in a server-client relationship using Node.js and vanilla JavaScript.
To follow along with this article, you will need the following installed on your machine:
jsonwebtoken
is an implementation of JSON Web Tokens.
You can add it to your JavaScript project by running the following command in your terminal:
- npm install jsonwebtoken
And import it into your files like so:
const jwt = require('jsonwebtoken');
To sign a token, you will need to have 3 pieces of information:
The token secret is a long random string used to encrypt and decrypt the data.
To generate this secret, one option is to use Node.js’s built-in crypto
library, like so:
> require('crypto').randomBytes(64).toString('hex')
// '09f26e402586e2faa8da4c98a35f1b20d6b033c6097befa8be3486a829587fe2f90a832bd3ff9d42710a4da095a2ce285b009f0c3730cd9b8e1af3eb84df6611'
Warning: Be careful! If your secret
is simple, the token verification process will be much easier to break by an unauthorized intruder.
Now, store this secret in your project’s .env
file:
TOKEN_SECRET=09f26e402586e2faa8da4c98a35f1b20d6b033c60...
To bring this token into a Node.js file and to use it, you have to use dotenv
:
- npm install dotenv
And import it into your files like so:
const dotenv = require('dotenv');
// get config vars
dotenv.config();
// access config var
process.env.TOKEN_SECRET;
The piece of data that you hash in your token can be something simple, like a user ID or username, or a much more complex object. In either case, it should be an identifier for a specific user.
The token expiration time is a string, such as 1800 seconds (30 minutes), that determines how long the token remains valid.
Here’s an example of a function for signing tokens:
function generateAccessToken(username) {
return jwt.sign(username, process.env.TOKEN_SECRET, { expiresIn: '1800s' });
}
This can be sent back from a request to sign in or log in a user:
app.post('/api/createNewUser', (req, res) => {
// ...
const token = generateAccessToken({ username: req.body.username });
res.json(token);
// ...
});
This example takes the username
value from the req
(request) and provides the token in the res
(response).
That concludes how jsonwebtoken
, crypto
, and dotenv
can be used to generate a JWT.
There are many ways to go about implementing a JWT authentication system in an Express.js application.
One approach is to utilize the middleware functionality in Express.js.
When a request is made to a specific route, you can have the (req, res)
variables sent to an intermediary function before the one specified in the app.get((req, res) => {})
call.
The middleware is a function that takes parameters of (req, res, next)
.
req
is the sent request (GET, POST, DELETE, PUT, etc.).res
is the response that can be sent back to the user in a multitude of ways (res.sendStatus(200)
, res.json()
, etc.).next
is a function that can be called to move the execution past the piece of middleware and into the actual app.get
server response.
Here is an example middleware function for authentication:
const jwt = require('jsonwebtoken');
function authenticateToken(req, res, next) {
const authHeader = req.headers['authorization']
const token = authHeader && authHeader.split(' ')[1]
if (token == null) return res.sendStatus(401)
jwt.verify(token, process.env.TOKEN_SECRET, (err, user) => {
console.log(err)
if (err) return res.sendStatus(403)
req.user = user
next()
})
}
An example request using this middleware function would resemble something like this:
GET https://example.com:4000/api/userOrders
Authorization: Bearer JWT_ACCESS_TOKEN
And an example of a request that would use that piece of middleware would resemble something like this:
app.get('/api/userOrders', authenticateToken, (req, res) => {
// executes after authenticateToken
// ...
})
This code will authenticate the token provided by the client. If it is valid, it can proceed to the request. If it is not valid, it can be handled as an error.
When the client receives the token, they often want to store it for gathering user information in future requests.
The most popular manner for storing auth tokens is in an HttpOnly
cookie. Note that an HttpOnly cookie can only be set by the server; the client-side code below stores the token in a regular cookie instead, which JavaScript can read and which is therefore more exposed to XSS.
Here's an implementation for storing a cookie using client-side JavaScript code:
// get token from fetch request
const token = await res.json();
// set token in cookie
document.cookie = `token=${token}`
This approach stores the token locally, where it can be referenced for future requests to the server.
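For the safer HttpOnly variant, the cookie has to come from the server as a Set-Cookie response header. A sketch of what that header value looks like (the helper name is hypothetical; with Express you would typically use `res.cookie()` instead of building the string by hand):

```javascript
// Hypothetical helper: builds the Set-Cookie header value for an
// HttpOnly token cookie. HttpOnly cookies cannot be read or set by
// client-side JavaScript, which protects the token from XSS.
function tokenCookieHeader(token) {
  return [
    `token=${encodeURIComponent(token)}`,
    'HttpOnly',        // hidden from document.cookie
    'Secure',          // only sent over HTTPS
    'SameSite=Strict', // not sent on cross-site requests
    'Path=/'
  ].join('; ');
}

console.log(tokenCookieHeader('JWT_ACCESS_TOKEN'));
// → token=JWT_ACCESS_TOKEN; HttpOnly; Secure; SameSite=Strict; Path=/
```

The browser then attaches this cookie to subsequent requests automatically, so the client never has to handle the token in script.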
That concludes the flow of requesting a token, generating a token, receiving a token, passing a token with new requests, and verifying a token.
In this article, you were introduced to JWTs and one approach to applying them to a Node.js application. This approach relied upon a combination of jsonwebtoken
, crypto
, dotenv
, and express
.
For another approach to using JWTs, there is How To Implement API Authentication with JSON Web Tokens and Passport.
For more background on JWTs, there is the “Introduction” documentation.
If you’d like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
WebSocket is an internet protocol that allows for full-duplex communication between a server and clients. This protocol goes beyond the typical HTTP request and response paradigm. With WebSockets, the server may send data to a client without the client initiating a request, which allows for some very interesting applications.
In this tutorial, you will build a real-time document collaboration application (similar to Google Docs). We’ll be using the Socket.IO Node.js server framework and Angular 7 to accomplish this.
You can find the complete source code for this example project on GitHub.
To complete this tutorial, you will need:
This tutorial was originally written in an environment consisting of Node.js v8.11.4, npm v6.4.1, and Angular v7.0.4.
This tutorial was verified with Node v14.6.0, npm v6.14.7, Angular v10.0.5, and Socket.IO v2.3.0.
First, open your terminal and create a new project directory that will hold both our server and client code:
- mkdir socket-example
Next, change into the project directory:
- cd socket-example
Then, create a new directory for the server code:
- mkdir socket-server
Next, change into the server directory.
- cd socket-server
Then, initialize a new npm
project:
- npm init -y
Now, we will install our package dependencies:
- npm install express@4.17.1 socket.io@2.3.0 @types/socket.io@2.1.10 --save
These packages include Express, Socket.IO, and @types/socket.io
.
Now that you have completed setting up the project, you can move on to writing code for the server.
First, create a new src
directory:
- mkdir src
Now, create a new file called app.js
in the src
directory, and open it using your favorite text editor:
- nano src/app.js
Start with the require
statements for Express and Socket.IO:
const app = require('express')();
const http = require('http').Server(app);
const io = require('socket.io')(http);
As you can tell, we’re using Express and Socket.IO to set up our server. Socket.IO provides a layer of abstraction over native WebSockets. It comes with some nice features, such as a fallback mechanism for older browsers that do not support WebSockets, and the ability to create rooms. We’ll see this in action in a minute.
For the purposes of our real-time document collaboration application, we will need a way to store documents
. In a production setting, you would want to use a database, but for the scope of this tutorial, we will use an in-memory store of documents
:
const documents = {};
Now, let’s define what we want our socket server to actually do:
io.on("connection", socket => {
// ...
});
Let’s break this down. .on('...')
is an event listener. The first parameter is the name of the event, and the second one is usually a callback executed when the event fires, with the event payload.
The first example we see is when a client connects to the socket server (connection
is a reserved event type in Socket.IO).
We get a socket
variable to pass to our callback to initiate communication to either that one socket or to multiple sockets (i.e., broadcasting).
safeJoin
We will set up a local function (safeJoin
) that takes care of joining and leaving rooms:
io.on("connection", socket => {
let previousId;
const safeJoin = currentId => {
socket.leave(previousId);
socket.join(currentId, () => console.log(`Socket ${socket.id} joined room ${currentId}`));
previousId = currentId;
};
// ...
});
In this case, when a client has joined a room, they are editing a particular document. So if multiple clients are in the same room, they are all editing the same document.
Technically, a socket can be in multiple rooms, but we don’t want to let one client edit multiple documents at the same time, so if they switch documents, we need to leave the previous room and join the new room. This little function takes care of that.
There are three event types that our socket is listening for from the client:
getDoc
addDoc
editDoc
And two event types that are emitted by our socket to the client:
document
documents
getDoc
Let’s work on the first event type - getDoc
:
io.on("connection", socket => {
// ...
socket.on("getDoc", docId => {
safeJoin(docId);
socket.emit("document", documents[docId]);
});
// ...
});
When the client emits the getDoc
event, the socket is going to take the payload (in our case, it’s just an id), join a room with that docId
, and emit the stored document
back to the initiating client only. That’s where socket.emit('document', ...)
comes into play.
addDoc
Let’s work on the second event type - addDoc
:
io.on("connection", socket => {
// ...
socket.on("addDoc", doc => {
documents[doc.id] = doc;
safeJoin(doc.id);
io.emit("documents", Object.keys(documents));
socket.emit("document", doc);
});
// ...
});
With the addDoc
event, the payload is a document
object, which, at the moment, consists only of an id generated by the client. We tell our socket to join the room of that ID so that any future edits can be broadcast to anyone in the same room.
Next, we want everyone connected to our server to know that there is a new document to work with, so we broadcast to all clients with the io.emit('documents', ...)
function.
Note the difference between socket.emit()
and io.emit()
- the socket
version is for emitting back to only the initiating client, while the io
version is for emitting to everyone connected to our server.
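The three broadcast scopes can be summed up with a toy in-memory model (this is not Socket.IO, just an illustration of who receives what):

```javascript
// Toy model of the three broadcast scopes used in this tutorial:
// io.emit (every client), socket.emit (the initiating client only),
// and socket.to(room).emit (room members except the sender).
class ToyHub {
  constructor() {
    this.clients = new Map(); // id -> { rooms: Set, inbox: [] }
  }
  connect(id) {
    this.clients.set(id, { rooms: new Set(), inbox: [] });
  }
  join(id, room) {
    this.clients.get(id).rooms.add(room);
  }
  emitAll(msg) { // like io.emit
    for (const c of this.clients.values()) c.inbox.push(msg);
  }
  emitToSender(id, msg) { // like socket.emit
    this.clients.get(id).inbox.push(msg);
  }
  emitToRoom(senderId, room, msg) { // like socket.to(room).emit
    for (const [id, c] of this.clients) {
      if (id !== senderId && c.rooms.has(room)) c.inbox.push(msg);
    }
  }
}

const hub = new ToyHub();
['a', 'b', 'c'].forEach((id) => hub.connect(id));
hub.join('a', 'doc1');
hub.join('b', 'doc1');
hub.emitToRoom('a', 'doc1', 'edit'); // only 'b' receives this
```

Keeping these scopes straight is what makes the editDoc broadcast reach collaborators without echoing the edit back to its author.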
editDoc
Let’s work on the third event type - editDoc
:
io.on("connection", socket => {
// ...
socket.on("editDoc", doc => {
documents[doc.id] = doc;
socket.to(doc.id).emit("document", doc);
});
// ...
});
With the editDoc
event, the payload will be the whole document at its state after any keystroke. We’ll replace the existing document in the database and then broadcast the new document to only the clients that are currently viewing that document. We do this by calling socket.to(doc.id).emit(document, doc)
, which emits to all sockets in that particular room.
Finally, whenever a new connection is made, we broadcast to all the clients to ensure the new connection receives the latest document changes when they connect:
io.on("connection", socket => {
// ...
io.emit("documents", Object.keys(documents));
console.log(`Socket ${socket.id} has connected`);
});
After the socket functions are all set up, pick a port and listen on it:
http.listen(4444, () => {
console.log('Listening on port 4444');
});
Run the following command in your terminal to start the server:
- node src/app.js
We now have a fully-functioning socket server for document collaboration!
Installing @angular/cli
and Creating the Client App
Open a new terminal window and navigate to the project directory.
Run the following commands to install the Angular CLI as a devDependency
:
- npm install @angular/cli@10.0.4 --save-dev
Now, use the @angular/cli
command to create a new Angular project, with no Angular Routing and with SCSS for styling:
- ng new socket-app --routing=false --style=scss
Then, change into the newly created client app directory:
- cd socket-app
Now, we will install our package dependencies:
- npm install ngx-socket-io@3.2.0 --save
ngx-socket-io
is an Angular wrapper over Socket.IO client libraries.
Then, use the @angular/cli
command to generate a document
model, a document-list
component, a document
component, and a document
service:
- ng generate class models/document --type=model
- ng generate component components/document-list
- ng generate component components/document
- ng generate service services/document
Now that you have completed setting up the project, you can move on to writing code for the client.
Open app.module.ts
:
- nano src/app/app.module.ts
And import FormsModule
, SocketioModule
, and SocketioConfig
:
// ... other imports
import { FormsModule } from '@angular/forms';
import { SocketIoModule, SocketIoConfig } from 'ngx-socket-io';
And before your @NgModule
declaration, define config
:
const config: SocketIoConfig = { url: 'http://localhost:4444', options: {} };
You’ll notice that this is the port number that we declared earlier in the server’s app.js
.
Now, add to your imports
array, so it looks like:
@NgModule({
// ...
imports: [
// ...
FormsModule,
SocketIoModule.forRoot(config)
],
// ...
})
This will fire off the connection to our socket server as soon as AppModule
loads.
Open document.model.ts
:
- nano src/app/models/document.model.ts
And define id
and doc
:
export class Document {
id: string;
doc: string;
}
Open document.service.ts
:
- nano src/app/services/document.service.ts
And add the following in the class definition:
import { Injectable } from '@angular/core';
import { Socket } from 'ngx-socket-io';
import { Document } from 'src/app/models/document.model';
@Injectable({
providedIn: 'root'
})
export class DocumentService {
currentDocument = this.socket.fromEvent<Document>('document');
documents = this.socket.fromEvent<string[]>('documents');
constructor(private socket: Socket) { }
getDocument(id: string) {
this.socket.emit('getDoc', id);
}
newDocument() {
this.socket.emit('addDoc', { id: this.docId(), doc: '' });
}
editDocument(document: Document) {
this.socket.emit('editDoc', document);
}
private docId() {
let text = '';
const possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
for (let i = 0; i < 5; i++) {
text += possible.charAt(Math.floor(Math.random() * possible.length));
}
return text;
}
}
Each of the methods here emits one of the three event types that the socket server is listening for. The properties currentDocument
and documents
represent the events emitted by the socket server, which are consumed on the client as Observables.
You may notice a call to this.docId()
. This is a little private method that generates a random string to assign as the document id.
Let’s put the list of documents in a sidenav. Right now, it’s only showing the docId
- a random string of characters.
Open document-list.component.html
:
- nano src/app/components/document-list/document-list.component.html
And replace the contents with the following:
<div class='sidenav'>
<span
(click)='newDoc()'
>
New Document
</span>
<span
[class.selected]='docId === currentDoc'
(click)='loadDoc(docId)'
*ngFor='let docId of documents | async'
>
{{ docId }}
</span>
</div>
Open document-list.component.scss
:
- nano src/app/components/document-list/document-list.component.scss
And add some styles:
.sidenav {
background-color: #111111;
height: 100%;
left: 0;
overflow-x: hidden;
padding-top: 20px;
position: fixed;
top: 0;
width: 220px;
span {
color: #818181;
display: block;
font-family: 'Roboto', Tahoma, Geneva, Verdana, sans-serif;
font-size: 25px;
padding: 6px 8px 6px 16px;
text-decoration: none;
&.selected {
color: #e1e1e1;
}
&:hover {
color: #f1f1f1;
cursor: pointer;
}
}
}
Open document-list.component.ts
:
- nano src/app/components/document-list/document-list.component.ts
And add the following in the class definition:
import { Component, OnInit, OnDestroy } from '@angular/core';
import { Observable, Subscription } from 'rxjs';
import { DocumentService } from 'src/app/services/document.service';
@Component({
selector: 'app-document-list',
templateUrl: './document-list.component.html',
styleUrls: ['./document-list.component.scss']
})
export class DocumentListComponent implements OnInit, OnDestroy {
documents: Observable<string[]>;
currentDoc: string;
private _docSub: Subscription;
constructor(private documentService: DocumentService) { }
ngOnInit() {
this.documents = this.documentService.documents;
this._docSub = this.documentService.currentDocument.subscribe(doc => this.currentDoc = doc.id);
}
ngOnDestroy() {
this._docSub.unsubscribe();
}
loadDoc(id: string) {
this.documentService.getDocument(id);
}
newDoc() {
this.documentService.newDocument();
}
}
Let’s start with the properties. documents
will be a stream of all available documents. currentDoc
is the id of the currently selected document. The document list needs to know what document we’re on, so we can highlight that doc id in the sidenav. _docSub
is a reference to the Subscription
that gives us the current or selected doc. We need this so we can unsubscribe in the ngOnDestroy
lifecycle method.
You’ll notice the methods loadDoc()
and newDoc()
don’t return or assign anything. Remember, these fire off events to the socket server, which turns around and fires an event back to our Observables. The returned values for getting an existing document or adding a new document are realized from the Observable
patterns above.
This will be the document editing surface.
Open document.component.html
:
- nano src/app/components/document/document.component.html
And replace the contents with the following:
<textarea
[(ngModel)]='document.doc'
(keyup)='editDoc()'
placeholder='Start typing...'
></textarea>
Open document.component.scss
:
- nano src/app/components/document/document.component.scss
And change some styles on the default HTML textarea
:
textarea {
border: none;
font-size: 18pt;
height: 100%;
padding: 20px 0 20px 15px;
position: fixed;
resize: none;
right: 0;
top: 0;
width: calc(100% - 235px);
}
Open document.component.ts
:
- nano src/app/components/document/document.component.ts
And add the following in the class definition:
import { Component, OnInit, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs';
import { startWith } from 'rxjs/operators';
import { Document } from 'src/app/models/document.model';
import { DocumentService } from 'src/app/services/document.service';
@Component({
selector: 'app-document',
templateUrl: './document.component.html',
styleUrls: ['./document.component.scss']
})
export class DocumentComponent implements OnInit, OnDestroy {
document: Document;
private _docSub: Subscription;
constructor(private documentService: DocumentService) { }
ngOnInit() {
this._docSub = this.documentService.currentDocument.pipe(
startWith({ id: '', doc: 'Select an existing document or create a new one to get started' })
).subscribe(document => this.document = document);
}
ngOnDestroy() {
this._docSub.unsubscribe();
}
editDoc() {
this.documentService.editDocument(this.document);
}
}
Similar to the pattern we used in the DocumentListComponent
above, we’re going to subscribe to the changes for our current document, and fire off an event to the socket server whenever we change the current document. This means that we will see all the changes if any other client is editing the same document we are, and vice versa. We use the RxJS startWith
operator to give a little message to our users when they first open the app.
Open app.component.html
:
- nano src/app/app.component.html
And compose the two custom components by replacing the contents with the following:
<app-document-list></app-document-list>
<app-document></app-document>
With our socket server still running in a terminal window, let’s open a new terminal window and start our Angular app:
- ng serve
Open more than one instance of http://localhost:4200
in separate browser tabs and see it in action.
Now, you can create new documents and see them update in both browser windows. You can make a change in one browser window and see the change reflected in the other browser window.
In this tutorial, you have completed an initial exploration into using WebSocket. You used it to build a real-time document collaboration application. It supports multiple browser sessions to connect to a server and update and modify multiple documents.
If you’d like to learn more about Angular, check out our Angular topic page for exercises and programming projects.
If you’d like to learn more about Socket.IO, check out Integrating Vue.js and Socket.IO.
Further WebSocket projects include real-time chat applications. See How To Build a Realtime Chat App with React and GraphQL.
In Node.js, you need to restart the process to put changes into effect. This adds an extra step to your workflow. You can eliminate this extra step by using nodemon
to restart the process automatically.
nodemon
is a command-line interface (CLI) utility developed by @rem that wraps your Node application, watches the file system, and automatically restarts the process.
In this article, you will learn about installing, setting up, and configuring nodemon
.
To follow along with this article, you will need:
Installing nodemon
First, you will need to install nodemon
on your machine. Install the utility either globally or locally on your project using npm or Yarn.
You can install nodemon
globally with npm
:
- npm install nodemon -g
Or with yarn:
- yarn global add nodemon
You can also install nodemon
locally with npm. When performing a local installation, you can install nodemon
as a development dependency with --save-dev
(or --dev
):
- npm install nodemon --save-dev
Or with yarn:
- yarn add nodemon --dev
One thing to note with a local installation is that you won't be able to use the nodemon
command directly from the command line:
- Outputcommand not found: nodemon
However, you can use it as part of some npm scripts or with npx.
That concludes the nodemon
installation process. Next, you will use nodemon
with your projects.
Setting Up an Example Express Project with nodemon
You can use nodemon
to start a Node script. For example, if you have an Express server setup in a server.js
file, you can start it and watch for changes like this:
- nodemon server.js
You can pass arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the default watched extensions (.js
, .mjs
, .json
, .coffee
, or .litcoffee
) in the current directory or a subdirectory, the process will restart.
Let's assume you write an example server.js
file that outputs the message: Dolphin app listening on port ${port}!
.
You can run the example with nodemon
:
- nodemon server.js
This produces the following terminal output:
Output[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon
is still running, let's make a change to the server.js
file so it outputs a different message: Shark app listening on port ${port}!
This produces the following additional terminal output:
Output[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from the Node.js application is displayed as expected. You can restart the process at any time by typing rs
and hitting ENTER
.
Alternatively, nodemon
will also look for the main
file specified in your project's package.json
file:
{
// ...
"main": "server.js",
// ...
}
Or, a start
script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you make those changes to package.json
, you can then call nodemon
to start the example app in watch mode without having to pass server.js
.
You can change the configuration settings available to nodemon
.
Let's go over some of the main options:
--exec: Use the --exec switch to specify a binary to execute the file with. For example, when combined with the ts-node binary, --exec can be useful for watching for changes and running TypeScript files.
--ext: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (e.g., --ext js,ts).
--delay: By default, nodemon waits one second to restart the process when a file changes, but with the --delay switch, you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
--watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so --watch lets you narrow that down to only specific subdirectories or files.
--ignore: Use the --ignore switch to ignore certain files, file patterns, or directories.
--verbose: A more verbose output with information about which file(s) changed to trigger a restart.
You can view all the available options with the following command:
- nodemon --help
Using these options, let's create a command to satisfy the following scenario:
Watch the server directory
Specify files with a .ts extension
Ignore files with a .test.ts suffix
Execute the file (server/server.ts) with ts-node
このコマンドは、--watch
、--ext
、--exec
、--ignore
、--delay
オプションを組み合わせて、このシナリオの条件を満たします。
前の例で、nodemon
の実行時に設定スイッチを追加するのは、非常に面倒な作業です。特定の設定が必要なプロジェクトに適した解決策は、nodemon.json
ファイルでこれらの設定を指定することです。
たとえば、前のコマンドラインの例と同じ設定ですが、nodemon.json
ファイルに配置されています。
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Notice the use of execMap
instead of the --exec
switch. execMap
allows you to specify binaries to use for given file extensions.
Alternatively, if you don't want to add a nodemon.json
config file to your project, you can add these configurations to the package.json
file under a nodemonConfig
key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you make those changes to nodemon.json
or package.json
, you can then start nodemon
with the desired script:
- nodemon server/server.ts
nodemon
will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated, avoiding copy-and-paste or typing errors on the command line.
In this article, you explored how to use nodemon
with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to see changes.
For more information about the available features and troubleshooting errors, consult the official documentation.
If you'd like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
Node is a runtime environment that makes it possible to write server-side JavaScript. It has gained wide adoption since its release in 2011. Writing server-side JavaScript can be challenging as a codebase grows, due to the dynamic and weakly-typed nature of the JavaScript language.
Developers coming to JavaScript from other languages often complain about its lack of strong static typing; TypeScript fills that gap.
TypeScript is a typed (optional) superset of JavaScript that can help with building and managing large-scale JavaScript projects. It can be thought of as JavaScript with additional features like strong static typing, compilation, and object-oriented programming.
Note: Technically, TypeScript is a superset of JavaScript, which means that all JavaScript code is valid TypeScript code.
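A quick illustration of the static typing TypeScript layers on top of JavaScript: the annotated function below compiles and runs like plain JavaScript, but the compiler rejects calls with the wrong argument type before the code ever runs.

```typescript
// Plain JavaScript would accept any argument here; the TypeScript
// annotations make the compiler catch mismatches at compile time.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet('Node')); // OK
// greet(42); // compile-time error: Argument of type 'number' is not
//            // assignable to parameter of type 'string'.
```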
Here are some benefits of using TypeScript:
In this tutorial, you will set up a Node project with TypeScript. You will build an Express application using TypeScript and transpile it down to neat, reliable JavaScript code.
Before you begin this guide, you will need Node.js installed on your machine. You can accomplish this by following the How To Install Node.js and Set Up a Local Development Environment guide.
To get started, create a new folder named node_project
and move into that directory:
- mkdir node_project
- cd node_project
Next, initialize it as an npm project:
- npm init
After running npm init
, you will need to supply npm with information about your project. If you'd rather let npm assume sensible defaults, you can add the y
flag to skip the prompts for additional information:
- npm init -y
Now that your project space is set up, you are ready to install the necessary dependencies.
With a minimal npm project initialized, the next step is to install the dependencies required to run TypeScript.
Run the following commands from your project directory to install the dependencies:
- npm install -D typescript@3.3.3
- npm install -D tslint@5.12.1
The -D
flag is the shortcut for --save-dev
. You can learn more about this flag in the npmjs documentation.
Now, let's install the Express framework:
- npm install -S express@4.16.4
- npm install -D @types/express@4.16.1
The second command installs the Express types for TypeScript support. Types in TypeScript are files, normally with an extension of .d.ts
, that are used to provide type information about an API, in this case the Express framework.
This package is required because TypeScript and Express are independent packages. Without the @types/express
package, there is no way for TypeScript to know about the types of Express classes.
In this section, you will set up TypeScript and configure linting for TypeScript. TypeScript uses a file called tsconfig.json
to configure the compiler options for a project. Create a tsconfig.json
file in the root of the project directory and paste in the following snippet:
{
"compilerOptions": {
"module": "commonjs",
"esModuleInterop": true,
"target": "es6",
"moduleResolution": "node",
"sourceMap": true,
"outDir": "dist"
},
"lib": ["es2015"]
}
Let's go over some of the key parts of the JSON snippet above:
module: Specifies the module code generation method. Node uses commonjs.
target: Specifies the output language level.
moduleResolution: This helps the compiler figure out what an import refers to. The value node mimics the Node module resolution mechanism.
outDir: This is the location to output .js files after transpilation. In this tutorial you will save it as dist.
An alternative to manually creating and populating the tsconfig.json
file is to run the following command:
- tsc --init
This command will generate a nicely commented tsconfig.json
file.
To learn more about the available key-value options, consult the official TypeScript documentation.
Now you can configure TypeScript linting for the project. In a terminal running in the root of your project directory, which this tutorial established as node_project
, run the following command to generate a tslint.json
file:
- ./node_modules/.bin/tslint --init
Open the newly generated tslint.json
file and add the no-console
rule:
{
"defaultSeverity": "error",
"extends": ["tslint:recommended"],
"jsRules": {},
"rules": {
"no-console": false
},
"rulesDirectory": []
}
By default, the TypeScript linter prevents debugging with console
statements, so you need to explicitly tell the linter to disable the default no-console
rule.
Updating the package.json
File
At this point, you could either run functions individually in the terminal or create npm scripts to run them.
In this step, you will create a start
script that will compile and transpile the TypeScript code, then run the resulting .js
application.
Open the package.json
file and update it accordingly:
{
"name": "node-with-ts",
"version": "1.0.0",
"description": "",
"main": "dist/app.js",
"scripts": {
"start": "tsc && node dist/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"devDependencies": {
"@types/express": "^4.16.1",
"tslint": "^5.12.1",
"typescript": "^3.3.3"
},
"dependencies": {
"express": "^4.16.4"
}
}
In the snippet above, you updated the main
path and added the start
command to the scripts section. When looking at the start
command, you'll see that first the tsc
command is run, and then the node
command. This will compile the application and then run the generated output with node
.
The tsc
command tells TypeScript to compile the application and place the generated .js
output in the specified outDir
directory, as configured in the tsconfig.json
file.
With TypeScript and its linter configured, you are now ready to build a Node Express server.
First, create a src
folder in the root of your project directory:
- mkdir src
Then create a file named app.ts
within it:
- touch src/app.ts
At this point, the folder structure should look like this:
├── node_modules/
├── src/
├── app.ts
├── package-lock.json
├── package.json
├── tsconfig.json
├── tslint.json
Open the app.ts
file with a text editor of your choice and paste in the following code snippet:
import express from 'express';
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
The code above creates a Node server that listens for requests on port 3000. Run the application with the following command:
- npm start
On success, a message is logged to the terminal:
- Outputserver is listening on 3000
You can now visit http://localhost:3000 in your browser, where you will see the following message:
- OutputThe sedulous hyena ate the antelope!
If you open the dist/app.js file, you will find the transpiled version of the TypeScript code:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const app = express_1.default();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
//# sourceMappingURL=app.js.map
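The __importDefault helper in the transpiled output exists because a CommonJS module has no real default export. Its behavior can be checked in plain Node — a small illustration, not part of the tutorial's code:

```javascript
// Reproduction of the helper TypeScript emits for `import express from 'express'`.
var __importDefault = function (mod) {
  return (mod && mod.__esModule) ? mod : { "default": mod };
};

// A plain CommonJS export (no __esModule flag) gets wrapped under `default`...
const cjsExport = function () {};
const wrapped = __importDefault(cjsExport);
console.log(wrapped.default === cjsExport); // true

// ...while a transpiled ES module is passed through unchanged.
const esModule = { __esModule: true, default: 'x' };
console.log(__importDefault(esModule) === esModule); // true
```

This is why the generated code calls express_1.default() rather than express_1() directly.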
At this point, your Node project is successfully set up to use TypeScript.
In this tutorial, you learned why TypeScript is useful for writing reliable JavaScript code, along with some of the other benefits of working with TypeScript.
Finally, you set up a Node project using the Express framework and used TypeScript to compile and run it.
When building a Node application quickly, you sometimes need an easy way to template the application.
Jade is the view engine Express uses by default, but its syntax is overly complex for many use cases. EJS works well as an alternative and is very easy to set up. Let's build a simple application and see how to use EJS to include repeated parts of a site (partials) and pass data to views.
We will create two pages for the application: one with a full-width layout and one with a sidebar.
The files the application needs are listed below. We will create the templates inside the views folder; the rest is a fairly standard Node folder layout.
- views
----- partials
---------- footer.ejs
---------- head.ejs
---------- header.ejs
----- pages
---------- index.ejs
---------- about.ejs
- package.json
- server.js
package.json holds the Node application's metadata and the dependencies we need (express and EJS). server.js holds the Express server setup and configuration; this is where we define the routes to our pages.
Let's look at the package.json file and set up the project:
{
"name": "node-ejs",
"main": "server.js",
"dependencies": {
"ejs": "^3.1.5",
"express": "^4.17.1"
}
}
All we need is Express and EJS. Now let's install the dependencies we just defined. Go ahead and run:
- npm install
With all of the dependencies installed, let's configure the application to use EJS and set up the routes for the two pages we need: the index page (full width) and the about page (sidebar). We will do all of this in the server.js file:
// load the things we need
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// use res.render to load up an ejs view file
// index page
app.get('/', function(req, res) {
res.render('pages/index');
});
// about page
app.get('/about', function(req, res) {
res.render('pages/about');
});
app.listen(8080);
console.log('8080 is the magic port');
Here we define the application and set it to listen on port 8080. We also set EJS as the view engine for the Express application using app.set('view engine', 'ejs');. Notice how we send a view to the user with res.render(). It is important to note that res.render() looks for the view inside the views folder, so we only have to define pages/index since the full path is views/pages/index.
Go ahead and start the server with:
- node server.js
The application is now visible in your browser at http://localhost:8080 and http://localhost:8080/about. With the application set up, let's define the view files and see how EJS works.
Like most applications, we have a lot of code that gets reused across the build. We will call these files partials and define three of them to use across the site: head.ejs, header.ejs, and footer.ejs. Let's create those files now.
<meta charset="UTF-8">
<title>EJS Is Fun</title>
<!-- CSS (load bootstrap from a CDN) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.2/css/bootstrap.min.css">
<style>
body { padding-top:50px; }
</style>
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="/">EJS Is Fun</a>
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="nav-link" href="/">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
</ul>
</nav>
<p class="text-center text-muted">© Copyright 2020 The Awesome People</p>
Our partials are now defined. All we have to do is include them in our views. Go into index.ejs and about.ejs and add the partials using the include syntax.
To embed an EJS partial in another file, use <%- include('RELATIVE/PATH/TO/FILE') %>.
The hyphen in <%- (rather than just <%) tells EJS to render raw HTML.
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
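As an aside, the difference between <%- (raw) and <%= (escaped) can be modeled in plain JavaScript — this is a simplified illustration of the idea, not EJS's actual implementation:

```javascript
// <%= %> HTML-escapes its output; <%- %> prints it raw.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

const partialHtml = '<nav>menu</nav>';
console.log(escapeHtml(partialHtml)); // what <%= would print: "&lt;nav&gt;menu&lt;/nav&gt;"
console.log(partialHtml);             // what <%- prints: "<nav>menu</nav>"
```

This is why partials, which contain markup that should render as HTML, are included with <%- rather than <%=.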
You can now see the defined view in your browser at http://localhost:8080.
For the about page, we will also add a Bootstrap sidebar to demonstrate how partials can be structured for reuse across different templates and pages.
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="row">
<div class="col-sm-8">
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</div>
<div class="col-sm-4">
<div class="well">
<h3>Look I'm A Sidebar!</h3>
</div>
</div>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
Visit http://localhost:8080/about and you can see the about page with a sidebar.
Now we can pass data from our Node application to our views with EJS.
Let's define a basic variable and a list and pass them to the home page. Go back into the server.js file and add the following inside the app.get('/') route:
// index page
app.get('/', function(req, res) {
var mascots = [
{ name: 'Sammy', organization: "DigitalOcean", birth_year: 2012},
{ name: 'Tux', organization: "Linux", birth_year: 1996},
{ name: 'Moby Dock', organization: "Docker", birth_year: 2013}
];
var tagline = "No programming concept is complete without a cute animal mascot.";
res.render('pages/index', {
mascots: mascots,
tagline: tagline
});
});
We have created a list called mascots and a simple string called tagline. Let's go into the index.ejs file and use them.
To echo a single variable, we just use <%= tagline %>. Let's add this to the index.ejs file:
...
<h2>Variable</h2>
<p><%= tagline %></p>
...
To loop over our data, we will use .forEach. Let's add this to our view file:
...
<ul>
<% mascots.forEach(function(mascot) { %>
<li>
<strong><%= mascot.name %></strong>
representing <%= mascot.organization %>, born <%= mascot.birth_year %>
</li>
<% }); %>
</ul>
...
The new information we added is now displayed in the browser.
EJS partials have access to exactly the same data as their parent view. Be careful, though: if you reference a variable inside a partial, it needs to be defined in every view that uses the partial, or you will get an error.
You can also define and pass variables to an EJS partial inside the include syntax, like this:
...
<header>
<%- include('../partials/header', {variant:'compact'}); %>
</header>
...
But here again, be careful about assuming that a variable has been defined.
If you want to reference a variable in a partial that may not always be defined, and give it a default value, you can do so like this:
...
<em>Variant: <%= typeof variant != 'undefined' ? variant : 'default' %></em>
...
In the code above, the EJS renders the value of variant if it is defined, and default if not.
EJS helps us develop applications quickly when they don't need to be overly complex. By using partials and easily passing variables to our views, we can build larger applications just as easily.
For more information on EJS, see the official documentation.
In Node.js, you need to restart the process for changes to take effect. This adds an extra step to your workflow. You can eliminate that extra step by using nodemon to restart the process automatically.
nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn how to install, set up, and configure nodemon.
To follow along with this article, you will need:
Installing nodemon
First, you need to install nodemon on your machine. Install the utility either globally or locally on your project using npm or Yarn:
You can install nodemon globally with npm:
- npm install nodemon -g
Or with Yarn:
- yarn global add nodemon
You can also install nodemon locally with npm. When performing a local installation, you can install nodemon as a dev dependency with --save-dev (or --dev):
- npm install nodemon --save-dev
Or with Yarn:
- yarn add nodemon --dev
One thing to be aware of with a local installation is that you will not be able to use the nodemon command directly from the command line:
- Outputcommand not found: nodemon
However, you can use it as part of an npm script or with npx.
This concludes the nodemon installation process. Next, we will use nodemon with our projects.
Using nodemon
We can use nodemon to start a Node script. For example, if we have an Express server set up in a server.js file, we can start it and watch for changes like this:
- nodemon server.js
You can pass arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the watched default extensions (.js, .mjs, .json, .coffee, or .litcoffee) in the current directory or a subdirectory, the process will restart.
Let's say we write an example server.js file that outputs the message: Dolphin app listening on port ${port}!.
We can run the example with nodemon:
- nodemon server.js
We will see the following terminal output:
Output[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon is still running, let's make a change to the server.js file so that it outputs the message: Shark app listening on port ${port}!.
We will see the following additional terminal output:
Output[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from the Node.js application displays as expected. You can restart the process at any time by typing rs and pressing ENTER.
Alternatively, nodemon will also look for the main file specified in your project's package.json file:
{
// ...
"main": "server.js",
// ...
}
Or, the start script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you have made changes to package.json, you can then call nodemon to start the example app in watch mode without having to pass in server.js.
You can modify the configuration settings available to nodemon.
Let's go over a few of the main options:
--exec: Use the --exec switch to specify a binary with which to execute the file. For example, when combined with the ts-node binary, --exec can be useful for watching for changes and running TypeScript files.
--ext: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (for example, --ext js,ts).
--delay: By default, nodemon waits one second to restart the process when a file changes, but with the --delay switch you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
--watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for every directory you want to watch. By default, the current directory and its subdirectories are watched, so with --watch you can narrow that down to only specific subdirectories or files.
--ignore: Use the --ignore switch to ignore certain files, file patterns, or directories.
--verbose: A more verbose output with information about which file(s) changed to trigger a restart.
You can see all of the available options with the following command:
- nodemon --help
Using these options, let's create the command to satisfy the following scenario: watching the server directory, for files with a .ts extension, ignoring files with a .test.ts suffix, waiting three seconds to restart after a file changes, and starting the file (server/server.ts) with ts-node:
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions of our scenario.
As in the previous example, adding configuration switches when running nodemon can get quite tedious. A better solution for projects that need specific configurations is to define them in a nodemon.json file.
For example, here are the same configurations as the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap instead of the --exec switch. execMap allows you to specify which binaries to use for given file extensions.
Alternatively, if you would prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you have made the changes to nodemon.json or package.json, you can then start nodemon with the desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-and-paste or typing errors on the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to see changes.
For more information about the available features and troubleshooting, see the official documentation.
If you would like to learn more about Node.js, see our Node.js topic page for exercises and programming projects.
Middleware is a function with access to an Express server's request and response lifecycle. Express.js offers built-in middleware, and allows you to produce custom versions for precise functionality, such as preventing a user from performing a certain operation or logging the path of an incoming request to your application.
In this article, you will learn about how to create a custom middleware in Express.js.
To follow along with this article, you will need:
request
and response
cycles. Check out our tutorials on How To Use the req
Object in Express and How To Use the res
Object in Express.All middleware functions in Express.js accept three arguments following the request
, response
, and next
lifecycle methods. In your index.js
file, define a function with the three lifecycle methods as arguments:
function myCustomMiddleware(req, res, next) {
// ...
}
The first argument, req
, is shorthand for the request
object with built-in properties to access data from the client side and facilitate HTTP requests. The res
argument is the response
object with built-in methods to send data to the client side through HTTP requests. The argument, next
, is a function that tells Express.js to continue on to the following middleware you have configured for your application.
Middleware has the ability to modify the req
and res
objects, run any code you wish, end the request
and response
cycle, and move onto the following functions.
Note the order of your middleware, as invoking the next()
function is required in each preceding middleware.
Now that you’ve reviewed the three arguments that build a middleware, let’s look at how to assemble a custom middleware.
req
ObjectTo identify the currently logged in user, you can construct a custom middleware that can fetch the user through authentication steps. In your setCurrentUser.js
file, define a function that accepts the three lifecycle methods as arguments:
// Require in logic from your authentication controller
const getUserFromToken = require("../getUserFromToken");
module.exports = function setCurrentUser(req, res, next) {
const token = req.header("authorization");
// look up the user based on the token
getUserFromToken(token).then(user => {
// append the user object to the request object
req.user = user;
// call next middleware in the stack
next();
});
};
Within your setCurrentUser()
function, the req
object applies the built-in .header()
method to return the access token from a user. Using the authentication controller method, getUserFromToken()
, your req.header()
logic passes in as an argument to look up the user based on their token. You can also use the req
object to define a custom property, .user
to store the user’s information. Once your middleware is complete, export the file.
You can enable your custom middleware in your Express server by applying the built-in Express.js middleware, .use()
.
In your server.js
file, instantiate an instance of Express and require in your setCurrentUser()
custom middleware:
const express = require('express');
const setCurrentUser = require('./middleware/setCurrentUser.js');
const app = express();
app.use(setCurrentUser);
// ...
The app.use()
middleware accepts your custom middleware as an argument, and authorizes your logic in your Express server.
res
ObjectYou can also create a custom middleware to handle functionality on your response
object, such as designing a new header.
In your addNewHeader.js
file, define a function and utilize the .setHeader()
method on the res
object:
module.exports = function addNewHeader(req, res, next) {
res.setHeader("X-New-Policy", "Success");
next();
};
Here, the .setHeader() method will apply the new header, X-New-Policy, with a value of Success, on each function call. The next() method tells Express.js to move on to the following middleware once execution completes.
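You can sanity-check this middleware without a running server by passing stub objects — an illustrative exercise, not part of the article's app:

```javascript
// addNewHeader middleware from above, exercised with stub req/res objects.
function addNewHeader(req, res, next) {
  res.setHeader("X-New-Policy", "Success");
  next();
}

const recorded = {};
const res = { setHeader(name, value) { recorded[name] = value; } };
let movedOn = false;

addNewHeader({}, res, () => { movedOn = true; });

console.log(recorded["X-New-Policy"]); // "Success"
console.log(movedOn); // true — next() was invoked
```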
Request
and Response
CycleExpress.js also allows you to end the request
and response
cycle in your custom middleware. A common use case for a custom middleware is to validate a user’s data set on your req
object.
In your isLoggedIn.js
file, define a function and set a conditional to check if a user’s data exists on the req
object:
module.exports = function isLoggedIn(req, res, next) {
if (req.user) {
next();
} else {
// return unauthorized
res.status(401).send("Unauthorized");
}
};
If the user’s data exists in the req object, your custom middleware will move on to the following functions. If the user’s data is not in the object, the response returns the 401 status code and an "Unauthorized" message to the client side.
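The guard's two paths can likewise be exercised with stubs; this is illustrative only, with a res stub that mimics Express's chainable status().send():

```javascript
// isLoggedIn middleware, with a stub res recording status and body.
function isLoggedIn(req, res, next) {
  if (req.user) {
    next();
  } else {
    res.status(401).send("Unauthorized");
  }
}

function makeRes() {
  return {
    code: 200,
    body: null,
    status(c) { this.code = c; return this; },
    send(b) { this.body = b; return this; },
  };
}

let reached = false;
isLoggedIn({ user: { name: "sammy" } }, makeRes(), () => { reached = true; });
console.log(reached); // true — authenticated requests continue

const rejected = makeRes();
isLoggedIn({}, rejected, () => {});
console.log(rejected.code, rejected.body); // 401 "Unauthorized"
```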
Once you’ve set your middleware, export the file and navigate to your Express.js server. In your server.js
file, require in and insert your custom middleware as an argument in a GET
request to authenticate a user through a single route:
const express = require("express");
const setCurrentUser = require("./middleware/setCurrentUser.js");
const isLoggedIn = require("./middleware/isLoggedIn.js");
const app = express();
app.use(setCurrentUser);
app.get("/users", isLoggedIn, function(req, res) {
// ...
});
The route /users
handles the logic within your isLoggedIn
custom middleware. Based on the order of your Express server, the route can also access the setCurrentUser
middleware as it is defined before your GET
request.
Express.js provides you the ability to customize your middleware outside of the built-in methods to parse through a user authentication and their data.
For further reading on writing custom Express.js middleware visit the official documentation on the Express.js site.
Short for request, the req object is one half of the request and response cycle, used to examine calls from the client side, make HTTP requests, and handle incoming data, whether in a string or a JSON object.
In this article, you will learn about the req
object in Express.
To follow along with this article, you will need:
Express servers receive data from the client side through the req
object in three instances: the req.params
, req.query
, and req.body
objects.
The req.params
object captures data based on the parameter specified in the URL. In your index.js
file, implement a GET
request with a parameter of '/:userid'
:
// GET https://example.com/user/1
app.get('/user/:userid', (req, res) => {
console.log(req.params.userid) // "1"
})
The req.params object tells Express to output the result of a user’s id through the parameter '/user/:userid'. Here, the GET request to https://example.com/user/1 with the designated parameter logs an id of "1" to the console.
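Conceptually, req.params is filled by matching the request path against the route pattern. A rough stand-alone sketch of that idea — this is not Express's actual implementation:

```javascript
// Naive route matcher that extracts named parameters like Express's req.params.
function matchRoute(pattern, path) {
  const keys = [];
  const source = pattern.replace(/:([^/]+)/g, (match, key) => {
    keys.push(key);
    return '([^/]+)';
  });
  const result = path.match(new RegExp('^' + source + '$'));
  if (!result) return null;
  const params = {};
  keys.forEach((key, i) => { params[key] = result[i + 1]; });
  return params;
}

console.log(matchRoute('/user/:userid', '/user/1')); // { userid: '1' }
console.log(matchRoute('/user/:userid', '/search')); // null
```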
To access a URL query string, apply the req.query
object to search, filter, and sort through data. In your index.js
file, include a GET
request to the route '/search'
:
// GET https://example.com/search?keyword=great-white
app.get('/search', (req, res) => {
console.log(req.query.keyword) // "great-white"
})
Utilizing the req.query
object matches data loaded from the client side in a query conditional. In this case, the GET
request to the '/search'
route informs Express to match keywords in the search query to https://example.com
. The result from appending the .keyword
property to the req.query
object logs into the console, "great-white"
.
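The values in req.query correspond to the URL's parsed query string, which you can reproduce with Node's built-in URL class:

```javascript
// Parsing the same query string that req.query would expose.
const url = new URL('https://example.com/search?keyword=great-white');
console.log(url.searchParams.get('keyword')); // "great-white"
```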
The req.body
object allows you to access data in a string or JSON object from the client side. You generally use the req.body
object to receive data through POST
and PUT
requests in the Express server.
In your index.js
file, set a POST
request to the route '/login'
:
// POST https://example.com/login
//
// {
// "email": "user@example.com",
// "password": "helloworld"
// }
app.post('/login', (req, res) => {
console.log(req.body.email) // "user@example.com"
console.log(req.body.password) // "helloworld"
})
When a user inputs their email and password on the client side, the req.body
object stores that information and sends it to your Express server. Logging the req.body
object into the console results in the user’s email and password.
Now that you’ve examined ways to implement the req
object, let’s look at other approaches to integrate into your Express server.
req
PropertiesProperties on the req
object can also return the parts of a URL based on the anatomy. This includes the protocol
, hostname
, path
, originalUrl
, and subdomains
.
In your index.js
file, set a GET
request to the '/creatures'
route:
// https://ocean.example.com/creatures?filter=sharks
app.get('/creatures', (req, res) => {
console.log(req.protocol) // "https"
console.log(req.hostname) // "ocean.example.com"
console.log(req.path) // "/creatures"
console.log(req.originalUrl) // "/creatures?filter=sharks"
console.log(req.subdomains) // "['ocean']"
})
You can access various parts of the URL using built-in properties such as .protocol
and .hostname
. Logging the req
object with each of the properties results in the anatomy of the URL.
req
PropertiesThe res
object consists of properties to maximize your calls to HTTP requests.
To access the HTTP method, whether a GET
, POST
, PUT
, or DELETE
, utilize the .method
property to your req
object. In your index.js
file, implement a DELETE
request to an anonymous endpoint:
app.delete('/', (req, res) => {
console.log(req.method) // "DELETE"
})
The .method
property returns the current HTTP request method. In this case, the console logs a DELETE
method.
For headers sent into your server, append the method .header()
to your req
object. Set a POST
request to the route '/login'
in your index.js
file:
app.post('/login', (req, res) => {
req.header('Content-Type') // "application/json"
req.header('user-agent') // "Mozilla/5.0 (Macintosh Intel Mac OS X 10_8_5) AppleWebKi..."
req.header('Authorization') // "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."
})
The req.header()
method will return the header type such as Content-Type
and Authorization
. The argument for req.header()
is case-insensitive so you can use req.header('Content-Type')
and req.header('content-type')
interchangeably.
If you’ve added cookie-parser
as a dependency in your Express server, the req.cookies
property will store values from your parser. In your index.js
file, set an instance of req.cookies
and apply the sessionDate
property:
// Cookie sessionDate=2019-05-28T01:49:11.968Z
req.cookies.sessionDate // "2019-05-28T01:49:11.968Z"
Notice the result returned from the cookie's session date when called from the req object.
Express provides built-in properties to utilize the req
object as part of the request
cycle to handle HTTP requests and data from the client side. If you’d like to view the official documentation on req
please visit the official Express.js docs.
morgan is a Node.js and Express middleware that logs HTTP requests and errors, simplifying the process. In Node.js and Express, middleware is a function that has access to the request and response lifecycle methods, and the next() method to continue logic in your Express server.
In this article you will explore how to implement morgan in your Express project.
To follow along with this article, you will need:
As Express.js is a Node.js framework, ensure you have the latest version of Node.js from Node.js before moving forward.
To include morgan in your Express project, you will need to install it as a dependency.
Create a new directory named express-morgan
for your project:
- mkdir express-morgan
Change into the new directory:
- cd express-morgan
Initialize a new Node project with defaults. This will include your package.json
file to access your dependencies:
- npm init -y
Install morgan as a dependency:
- npm install morgan --save
Create your entry file, index.js
. This is where you will handle logic in your Express server:
- touch index.js
Now that you’ve added morgan to your project, let’s include it in your Express server. In your index.js
file, instantiate an Express instance and require morgan
:
const express = require('express');
const morgan = require('morgan');
const app = express();
app.listen(3000, () => {
console.debug('App listening on :3000');
});
With your Express server now set up, let’s look at using morgan to add request logging.
To use morgan in your Express server, you can invoke an instance and pass as an argument in the .use()
middleware before your HTTP requests. morgan comes with a suite of presets, or predefined format strings, to create a new logger middleware with built-in format and options. The preset tiny
provides the minimal output when logging HTTP requests.
In your index.js
file, invoke the app.use()
Express middleware and pass morgan()
as an argument:
const app = express();
app.use(morgan('tiny'));
Including the preset tiny
as an argument to morgan()
will use its built-in method, identify the URL, declare a status, and the request’s response time in milliseconds.
Alternatively, morgan reads presets like tiny
in a format string defined below:
morgan(':method :url :status :res[content-length] - :response-time ms');
This provides the same functionality contained in the tiny preset, in a format that morgan parses. Following the : symbol are morgan functions called tokens. You can use the format string to define tokens and create your own custom morgan middleware.
Tokens in morgan are functions identified following the :
symbol. morgan allows you to create your own tokens with the .token()
method.
The .token()
method accepts a type, or the name of the token as the first argument, following a callback function. morgan will run the callback function each time a log occurs using the token. As a middleware, morgan applies the req
and res
objects as arguments.
In your index.js
file, employ the .token()
method, and pass a type as the first argument following an anonymous function:
morgan.token('host', function(req, res) {
return req.hostname;
});
The anonymous callback function will return the hostname
on the req
object as a new token to use in an HTTP request in your Express server.
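Internally, morgan expands each :token in the format string by calling the matching token function with req and res. A simplified sketch of that mechanism — illustrative, not morgan's actual code:

```javascript
// Tiny format-string expander imitating morgan's token lookup.
const tokens = {
  method: (req) => req.method,
  url: (req) => req.url,
  host: (req) => req.hostname, // like the custom 'host' token defined above
};

function formatLine(format, req, res) {
  return format.replace(/:(\w+)/g, (match, name) => tokens[name](req, res));
}

const req = { method: 'GET', url: '/users', hostname: 'example.com' };
console.log(formatLine(':method :host :url', req, {})); // "GET example.com /users"
```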
To denote custom arguments, you can use square brackets to define arguments passed to a token. This will allow your tokens to accept additional arguments. In your index.js
file apply a custom argument to the morgan format string in the :param
token:
app.use(morgan(':method :host :status :param[id] :res[content-length] - :response-time ms'));
morgan.token('param', function(req, res, param) {
return req.params[param];
});
The custom argument id
on the :param
token in the morgan invocation will include the ID in the parameter following the .token()
method.
morgan allows flexibility when logging HTTP requests and updates precise status and response time in custom format strings or in presets. For further reading, check out the morgan documentation.
If you work on multiple Node.js projects, you’ve probably run into this one time or another. You have the latest and greatest version of Node.js installed, and the project you’re about to work on requires an older version. In those situations, the Node Version Manager (nvm) is a great tool to use, allowing you to install multiple versions of Node.js and switch between them as you see fit.
In this tutorial, you will install nvm
and learn to install, remove, and switch between different versions of Node.js.
To complete this tutorial, you will need the following:
To get started, you will need to install the Node Version Manager, or nvm
, on your system. You can install it manually by running the following command:
- curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
If you prefer wget
, you can run this command:
- wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
Once installed, close your terminal application for changes to take effect. You will also need to add a couple of lines to your bash shell startup file. This file might have the name .bashrc
, .bash_profile
, or .zshrc
depending on your operating system. To do this, reopen your terminal app and run the following commands:
- export NVM_DIR="$HOME/.nvm"
- [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
- [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
With nvm
installed, you can now install and work with multiple versions of Node.js.
Now that you have nvm
installed, you can install a few different versions of Node.js:
- nvm install 0.10
After running this command, this is the output that will display in your terminal app:
OutputDownloading and installing node v0.10.48...
Downloading https://nodejs.org/dist/v0.10.48/node-v0.10.48-darwin-x64.tar.xz...
######################################################################### 100.0%
Computing checksum with shasum -a 256
Checksums matched!
Now using node v0.10.48 (npm v2.15.1)
You can also install Node version 8 and version 12:
- nvm install 8
- nvm install 12
Upon running each command, nvm
will download the version of Node.js from the official website and install it. Once installed, it will also set the version you just installed as the active version.
If you were to run node --version
after each of the aforementioned commands, you’d see the most recent version of the respective major version.
nvm
isn’t limited to major versions either. You could also run nvm install 12.0.0
to explicitly install the specific 12.0.0 version of Node.js.
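As a convenience, nvm can also read the desired version from an .nvmrc file in the project directory, so running nvm use with no argument switches to the pinned version. The version number below is just an example:

```shell
# Pin the project's Node version in an .nvmrc file (one line: the version).
echo "12.0.0" > .nvmrc
cat .nvmrc
# From this directory, `nvm use` (with no argument) now reads .nvmrc.
```

This makes it easy for everyone working on a project to run the same Node.js version without passing it on the command line each time.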
With a handful of different versions of Node.js installed, we can run nvm
with the ls
argument to list out everything we have installed:
- nvm ls
The output produced by running this command might look something like this:
Output v0.10.48
v4.9.1
v6.10.3
v6.14.4
v8.4.0
v8.10.0
v10.13.0
v10.15.0
v10.15.3
-> v12.0.0
v12.7.0
system
default -> v10.15 (-> v10.15.3)
node -> stable (-> v12.7.0) (default)
stable -> 12.7 (-> v12.7.0) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/dubnium (-> N/A)
lts/argon -> v4.9.1
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.16.0 (-> N/A)
lts/dubnium -> v10.16.0 (-> N/A)
Your output will probably differ depending on how many versions of Node.js you have installed on your machine.
The little ->
indicates the active version, and default ->
indicates the default version of Node.js. The default version of Node is the version that will be available when you open a new shell. system
corresponds with the version of Node.js installed outside of nvm
on your system.
You may want to change the version of Node.js that your machine defaults to. You can also use nvm
to accomplish this.
Even while juggling multiple versions, there is a good chance you have one version that you prefer to run the majority of the time. Oftentimes, that would be the latest stable version of Node.js. At the time of this tutorial's release, the latest stable version of Node.js is version 15.1.0.
To set the latest stable version as your default, run:
- nvm alias default stable
After running this command, this will be the output you see:
Outputdefault -> stable (-> v15.1.0)
You may also have a specific version number you would like to set as your default. To alias default to a specific version, run:
- nvm alias default 10.15
Outputdefault -> 10.15 (-> v10.15.3)
Now every time you open a new shell, that version of Node.js will be immediately available. Some work you do may require different versions of Node.js. This is something nvm
can help you with as well.
To switch to a different version of Node.js, use the nvm
command use
followed by the version of Node.js you would like to use:
- nvm use 0.10
This is the output you will see:
OutputNow using node v0.10.48 (npm v2.15.1)
You can even switch back to your default version:
- nvm use default
At this point, you have installed several versions of Node.js. You can use nvm
to uninstall any unwanted version of Node.js you may have.
You may have several versions of Node.js installed due to working on a variety of projects on your machine.
Fortunately, you can remove Node.js versions just as easily as you installed them:
- nvm uninstall 0.10
This is the output that will display after running this command:
OutputUninstalled node v0.10.48
Unfortunately, when you specify a major or minor version, nvm
will only uninstall
the latest installed version that matches the version number.
So, if you have two different versions of Node.js version 6 installed, you have to run the uninstall
command for each version:
Output$ nvm uninstall 6
Uninstalled node v6.14.4
$ nvm uninstall 6
Uninstalled node v6.10.3
It’s worth noting that you can’t remove a version of Node.js that is currently in use and active.
You may want to return to your system’s default settings and stop using nvm. The next step will explain how to do this.
If you would like to completely remove nvm
from your machine, you can use the unload
command:
- nvm unload
If you would still like to keep nvm
on your machine, but you want to return to your system’s installed version of Node.js, you can make the switch by running this command:
- nvm use system
Now your machine will return to the installed version of Node.js.
Working on multiple projects that use different versions of Node.js doesn’t have to be a nightmare. Node Version Manager makes the process seamless. If you would like to avoid having to remember to switch versions, you can take things a step further by creating a .nvmrc
file in your project’s root:
- echo "12" > .nvmrc
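Assuming nvm is installed and loaded into your shell, running `nvm use` with no arguments in a directory containing a `.nvmrc` reads the version from that file. A quick sketch (the `nvm use` line is commented out because it requires nvm in the current shell):

```shell
# Record the major version in .nvmrc; writing and checking the file
# needs nothing nvm-specific.
echo "12" > .nvmrc
cat .nvmrc        # prints: 12
# nvm use         # would switch to the latest installed v12.x
```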
As a next step, you can learn to create your very own Node.js program with this How To Write and Run Your First Program in Node.js tutorial.
Download the Complete eBook!
How To Code in Node.js eBook in EPUB format
How To Code in Node.js eBook in PDF format
Node.js is a popular open-source runtime environment that can execute JavaScript outside of the browser. The Node runtime is commonly used for back-end web development, leveraging its asynchronous capabilities to create networking applications and web servers. Node is also a popular choice for building command line tools.
In this book, you will go through exercises to learn the basics of how to code in Node.js, gaining skills that apply equally to back-end and full stack development in the process.
By the end of this book you will be able to write programs that leverage Node’s asynchronous code execution capabilities, complete with event emitters and listeners that will respond to user actions. Along the way you will learn how to debug Node applications using the built-in debugging utilities, as well as the Chrome browser’s DevTools utilities. You will also learn how to write automated tests for your programs to ensure that any features that you add or change function as you expect.
You can download the eBook in either the EPUB or PDF format by following the links above.
After you’ve finished this book, if you’d like to learn more about how to build tools and applications with Node.js, visit the DigitalOcean Community’s Node.js section (https://www.digitalocean.com/community/tags/node-js).
Node is a run-time environment that makes it possible to write server-side JavaScript. It has gained wide adoption since its release in 2011. Writing server-side JavaScript can be challenging as a codebase grows, due to the nature of the JavaScript language: dynamic and weakly typed.
Developers coming to JavaScript from other languages often complain about its lack of strong static typing; this is where TypeScript comes in, to bridge that gap.
TypeScript is a typed (optional) super-set of JavaScript that can help with building and managing large-scale JavaScript projects. It can be thought of as JavaScript with additional features like strong static typing, compilation, and object-oriented programming.
Note: TypeScript is technically a super-set of JavaScript, which means that all JavaScript code is valid TypeScript code.
Here are some benefits of using TypeScript:
In this tutorial, you will set up a Node project with TypeScript. You will build an Express application using TypeScript and transpile it down to neat, reliable JavaScript code.
Before beginning this guide, you will need Node.js installed on your machine. You can accomplish this by following the How To Install Node.js and Create a Local Development Environment guide for your operating system.
To get started, create a new folder named node_project and move into that directory:
- mkdir node_project
- cd node_project
Next, initialize it as an npm project:
- npm init
After running npm init, you will need to provide npm with information about your project. If you'd rather let npm assume sensible defaults, you can add the y flag to skip the prompts for additional information:
- npm init -y
Now that your project space is set up, you are ready to move on to installing the necessary dependencies.
With a blank npm project initialized, the next step is to install the dependencies required to run TypeScript.
Run the following commands from your project directory to install the dependencies:
- npm install -D typescript@3.3.3
- npm install -D tslint@5.12.1
The -D flag is shorthand for --save-dev. You can learn more about this flag in the npmjs documentation.
Now it's time to install the Express framework:
- npm install -S express@4.16.4
- npm install -D @types/express@4.16.1
The second command installs the Express types for TypeScript support. Types in TypeScript are files, normally with an extension of .d.ts, that are used to provide type information about an API, in this case the Express framework.
This package is required because TypeScript and Express are independent packages. Without the @types/express package, there is no way for TypeScript to know about the types of Express classes.
In this section, you will set up TypeScript and configure linting for TypeScript. TypeScript uses a file called tsconfig.json to configure the compiler options for a project. Create a tsconfig.json file in the root of the project directory and paste in the following snippet:
{
  "compilerOptions": {
    "module": "commonjs",
    "esModuleInterop": true,
    "target": "es6",
    "moduleResolution": "node",
    "sourceMap": true,
    "outDir": "dist",
    "lib": ["es2015"]
  }
}
Let's go over some of the keys in the JSON snippet above:
- module: Specifies the module code generation method. Node uses commonjs.
- target: Specifies the output language level.
- moduleResolution: Helps the compiler figure out what an import refers to. The value node mimics the Node module resolution mechanism.
- outDir: The location to output .js files after transpilation. In this tutorial you will save them in dist.
An alternative to manually creating and populating the tsconfig.json file is to run the following command:
- tsc --init
This command will generate a nicely commented tsconfig.json file.
To learn more about the available key/value options, the official TypeScript documentation provides an explanation of every option.
Now you can configure TypeScript linting for the project. In a terminal running in the root of your project's directory, which this tutorial established as node_project, run the following command to generate a tslint.json file:
- ./node_modules/.bin/tslint --init
Open the newly generated tslint.json file and add the no-console rule accordingly:
{
"defaultSeverity": "error",
"extends": ["tslint:recommended"],
"jsRules": {},
"rules": {
"no-console": false
},
"rulesDirectory": []
}
By default, the TypeScript linter prevents the use of debugging console statements, so you need to explicitly tell the linter to revoke the default no-console rule.
Updating the package.json file
At this point in the tutorial, you can either run functions in the terminal individually or create an npm script to run them.
In this step, you will make a start script that will compile and transpile the TypeScript code, and then run the resulting app.js.
Open the package.json file and update it accordingly:
{
"name": "node-with-ts",
"version": "1.0.0",
"description": "",
"main": "dist/app.js",
"scripts": {
"start": "tsc && node dist/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"devDependencies": {
"@types/express": "^4.16.1",
"tslint": "^5.12.1",
"typescript": "^3.3.3"
},
"dependencies": {
"express": "^4.16.4"
}
}
In the snippet above, you updated the main path and added the start command to the scripts section. When looking at the start command, you'll see that first the tsc command is run, and then the node command. This will compile and then run the generated output with node.
The tsc command tells TypeScript to compile the application and place the generated .js output in the specified outDir directory, as set in the tsconfig.json file.
With TypeScript and the linter configured, it is now time to build the Node Express server.
First, create a src folder in the root of your project directory:
- mkdir src
Then create a file named app.ts within it:
- touch src/app.ts
The folder structure will now look like this:
├── node_modules/
├── src/
├── app.ts
├── package-lock.json
├── package.json
├── tsconfig.json
├── tslint.json
Open the app.ts file with a text editor of your choice and paste in the following code snippet:
import express from 'express';
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
The code above creates a Node server that listens for requests on port 3000. Run the app using the following command:
- npm start
If it runs successfully, a message will be logged to the terminal:
- Outputserver is listening on 3000
Now you can visit http://localhost:3000 in your browser and you should see the message:
- OutputThe sedulous hyena ate the antelope!
Open the dist/app.js file and you will find the transpiled version of the TypeScript code:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const app = express_1.default();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
//# sourceMappingURL=app.js.map
At this point you have successfully set up your Node project with TypeScript.
In this tutorial, you learned why TypeScript is useful for writing reliable JavaScript code, and you saw some of the benefits of using it. Finally, you set up a Node project with the Express framework, then compiled and ran the project with TypeScript.
When quickly building Node applications, sometimes we need an easy and fast way to template our application.
Jade comes as the default view engine for Express, but Jade syntax can be overly complex for many use cases. EJS is one alternative that does the job well and is very easy to set up. Let's take a look at how to create a basic application and use EJS to include repeatable parts of our site (partials) and pass data to our views.
We will be creating two pages for the application: one page with full width and one with a sidebar.
Here are the files we'll need for our application. We'll do our templating inside of the views folder, and the rest is fairly standard Node practice.
- views
----- partials
---------- footer.ejs
---------- head.ejs
---------- header.ejs
----- pages
---------- index.ejs
---------- about.ejs
- package.json
- server.js
package.json will hold our Node application information and the dependencies we need (express and EJS). server.js will hold our Express server setup and configuration. We'll define our routes to our pages there.
Let's go into the package.json file and set up our project there.
{
"name": "node-ejs",
"main": "server.js",
"dependencies": {
"ejs": "^3.1.5",
"express": "^4.17.1"
}
}
All we need is Express and EJS. Now we have to install the dependencies we just defined. Go ahead and run:
- npm install
With all of our dependencies installed, let's configure the application to use EJS and set up the routes for the two pages we need: the index page (full width) and the about page (sidebar). We will do all of this inside our server.js file.
// load the things we need
var express = require('express');
var app = express();
// set the view engine to ejs
app.set('view engine', 'ejs');
// use res.render to load up an ejs view file
// index page
app.get('/', function(req, res) {
res.render('pages/index');
});
// about page
app.get('/about', function(req, res) {
res.render('pages/about');
});
app.listen(8080);
console.log('8080 is the magic port');
Here we define our application and set it to listen on port 8080. We also have to set EJS as the view engine for our Express application using app.set('view engine', 'ejs');. Notice how we send a view to the user with res.render(). It is important to note that res.render() looks in a views folder for the view, so we only have to define pages/index since the full path is views/pages/index.
Go ahead and start the server using:
- node server.js
Now we can view our application in the browser at http://localhost:8080 and http://localhost:8080/about. Our application is wired up; now we need to define our view files and see how EJS works there.
Like a lot of the applications we build, there will be a lot of code that is reused. We'll call those partials and define three files we'll use across all of our site: head.ejs, header.ejs, and footer.ejs. Let's make those files now.
views/partials/head.ejs:
<meta charset="UTF-8">
<title>EJS Is Fun</title>
<!-- CSS (load bootstrap from a CDN) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.2/css/bootstrap.min.css">
<style>
body { padding-top:50px; }
</style>
views/partials/header.ejs:
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<a class="navbar-brand" href="/">EJS Is Fun</a>
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="nav-link" href="/">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
</ul>
</nav>
views/partials/footer.ejs:
<p class="text-center text-muted">© Copyright 2020 The Awesome People</p>
Now we have our partials defined. All we have to do is include them in our views. Let's go into index.ejs and about.ejs and use the include syntax to add the partials.
Use <%- include('RELATIVE/PATH/TO/FILE') %> to embed an EJS partial in another file.
Use <%- rather than just <% to tell EJS to render raw HTML.
views/pages/index.ejs:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
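The <%- in the include lines above matters: <%= HTML-escapes its output, while <%- emits it verbatim. A minimal sketch of the difference in plain JavaScript (escapeHtml is a simplified stand-in for illustration, not EJS's real escaper):

```javascript
// <%= value %> HTML-escapes its output; <%- value %> emits it verbatim.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

const partial = '<nav>menu</nav>';
console.log(escapeHtml(partial)); // what <%= would print: entity-escaped markup
console.log(partial);             // what <%- prints: the raw HTML
```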
Now we can view the defined view in the browser at http://localhost:8080.
For the about page, we also add a bootstrap sidebar to demonstrate how partials can be structured for reuse across different templates and pages.
views/pages/about.ejs:
<!DOCTYPE html>
<html lang="en">
<head>
<%- include('../partials/head'); %>
</head>
<body class="container">
<header>
<%- include('../partials/header'); %>
</header>
<main>
<div class="row">
<div class="col-sm-8">
<div class="jumbotron">
<h1>This is great</h1>
<p>Welcome to templating using EJS</p>
</div>
</div>
<div class="col-sm-4">
<div class="well">
<h3>Look I'm A Sidebar!</h3>
</div>
</div>
</div>
</main>
<footer>
<%- include('../partials/footer'); %>
</footer>
</body>
</html>
If we visit http://localhost:8080/about, we can see our about page with a sidebar!
Now we can start using EJS for passing data from our Node application to our views.
Let's define some basic variables and a list to pass to our home page. Go back into your server.js file and add the following inside your app.get('/') route:
// index page
app.get('/', function(req, res) {
var mascots = [
{ name: 'Sammy', organization: "DigitalOcean", birth_year: 2012},
{ name: 'Tux', organization: "Linux", birth_year: 1996},
{ name: 'Moby Dock', organization: "Docker", birth_year: 2013}
];
var tagline = "No programming concept is complete without a cute animal mascot.";
res.render('pages/index', {
mascots: mascots,
tagline: tagline
});
});
We've created a list called mascots and a simple string called tagline. Let's go into our index.ejs file and use them.
To echo a single variable, we just use <%= tagline %>. Let's add this to the index.ejs file:
...
<h2>Variable</h2>
<p><%= tagline %></p>
...
To loop over our data, we will use .forEach. Let's add this to our view file:
...
<ul>
<% mascots.forEach(function(mascot) { %>
<li>
<strong><%= mascot.name %></strong>
representing <%= mascot.organization %>, born <%= mascot.birth_year %>
</li>
<% }); %>
</ul>
...
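The forEach block above is ordinary JavaScript evaluated against the locals passed from server.js. A rough plain-JavaScript equivalent (data shape taken from server.js, list shortened for the sketch):

```javascript
const mascots = [
  { name: 'Sammy', organization: 'DigitalOcean', birth_year: 2012 },
  { name: 'Tux', organization: 'Linux', birth_year: 1996 },
];

// Each iteration contributes one <li>, exactly as the template loop does.
const items = mascots.map(m =>
  `<li><strong>${m.name}</strong> representing ${m.organization}, born ${m.birth_year}</li>`
);
console.log(`<ul>\n${items.join('\n')}\n</ul>`);
```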
Now we can see the newly added information in the browser!
EJS partials have access to all the same data as the parent view. But be careful: if you reference a variable in a partial, it needs to be defined in every view that uses the partial, or an error will be thrown.
You can also define and pass variables to an EJS partial inside the include syntax, like this:
...
<header>
<%- include('../partials/header', {variant:'compact'}); %>
</header>
...
Again, though, you need to be careful about assuming a variable has been defined. If you want to reference a variable in a partial that may not always be defined, and give it a default value, you can do so like this:
...
<em>Variant: <%= typeof variant != 'undefined' ? variant : 'default' %></em>
...
In the line above, the EJS code renders the value of variant if it's defined, and default if not.
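The same guard can be exercised in plain JavaScript (variantOrDefault is an illustrative name, not part of EJS):

```javascript
// Mirrors: typeof variant != 'undefined' ? variant : 'default'
function variantOrDefault(locals) {
  return typeof locals.variant !== 'undefined' ? locals.variant : 'default';
}

console.log(variantOrDefault({ variant: 'compact' })); // compact
console.log(variantOrDefault({}));                     // default
```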
EJS lets us spin up quick applications when we don't need anything too complex. By using partials and having the ability to pass variables to our views, we can build some nice applications quickly.
For more reference on EJS, check out the official documentation.
In Node.js, you need to restart the process for changes to take effect. This adds an extra step to your workflow. You can eliminate this extra step by using nodemon to restart the process automatically.
nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn about installing and setting up nodemon.
If you would like to follow along with this article, you will need:
Installing nodemon
First, you will need to install nodemon on your machine. Install the utility either globally or locally on your project using npm or Yarn.
You can install nodemon globally with npm:
- npm install nodemon -g
Or with Yarn:
- yarn global add nodemon
You can also install nodemon locally with npm. When performing a local installation, we can install nodemon as a dev dependency with --save-dev (or --dev):
- npm install nodemon --save-dev
Or with Yarn:
- yarn add nodemon --dev
One thing to be aware of with a local install is that you won't be able to use the nodemon command directly from the command line:
- Outputcommand not found: nodemon
However, you can use it as part of some npm scripts or with npx.
That concludes the nodemon installation process. Next, we will use nodemon with our projects.
Using nodemon
We can use nodemon to start a Node script. For example, if we have an Express server setup in a server.js file, we can start it and watch for changes like this:
- nodemon server.js
You can also pass in arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
The process restarts every time you make a change to a file with one of the default watched extensions (.js, .mjs, .json, .coffee, or .litcoffee) in the current directory or a subdirectory.
Let's say we write an example server.js file that outputs the message: Dolphin app listening on port ${port}!.
We can run the example with nodemon:
- nodemon server.js
We see the following terminal output:
Output[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon is still running, let's make a change to the server.js file so it outputs the message: Shark app listening on port ${port}!.
We see the following additional terminal output:
Output[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from our Node.js application is displayed as expected. You can restart the process at any time by typing rs and pressing ENTER.
Alternatively, nodemon will also look for a main file specified in your project's package.json file:
{
// ...
"main": "server.js",
// ...
}
Or, a start script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
After making changes to package.json, you can then call nodemon to start the example app in watch mode without having to pass in server.js.
You can modify the configuration settings available to nodemon.
Let's go over a few of the main options:
- --exec: Use the --exec switch to specify a binary to execute the file with. For example, when combined with the ts-node binary, --exec can be useful for watching changes and running TypeScript files.
- --ext: Specify different file extensions to watch. Provide a comma-separated list of file extensions (for example, --ext js,ts).
- --delay: By default, nodemon waits one second before restarting the process when a file changes, but with the --delay switch you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
- --watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so --watch lets you narrow this down to only certain subdirectories or files.
- --ignore: Use --ignore to ignore certain files, file patterns, or directories.
- --verbose: More verbose output with information about which file or files changed to trigger a restart.
You can view all available options with the following command:
- nodemon --help
Using these options, let's create the command to satisfy the following scenario:
- Watching the server directory
- Specifying files with a .ts extension
- Ignoring files with a .test.ts suffix
- Executing the file (server/server.ts) with ts-node
- Waiting three seconds to restart after a file changes
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions for our scenario.
As in the previous example, adding configuration switches when running nodemon can get quite tedious. A better solution for projects that need specific configurations is to define these configurations in a nodemon.json file.
For example, here are the same configurations as the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Notice the use of execMap instead of the --exec switch. execMap allows you to specify which binaries to use for given file extensions.
Alternatively, if you prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you've made the changes to nodemon.json or package.json, you can then start nodemon with the desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way your configurations can be saved, shared, and repeated to avoid copy-and-paste or typing errors in the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view changes.
For more information on the available features and troubleshooting, consult the official documentation.
If you'd like to learn more about Node.js, check out the Node.js topic page for exercises and programming projects.
]]>No Node.js, é necessário reiniciar o processo para fazer com que as alterações sejam ativadas. Isso adiciona um passo extra ao seu fluxo de trabalho para que as alterações sejam realizadas. É possível eliminar esse passo extra usando o nodemon
para reiniciar o processo automaticamente.
O nodemon
é um utilitário de interface de linha de comando (CLI) desenvolvido pelo @rem que encapsula seu aplicativo Node, monitora o sistema de arquivos e reinicia o processo automaticamente.
Neste artigo, você irá aprender sobre a instalação e configuração do nodemon
.
Se quiser acompanhar os passos deste artigo, será necessário:
nodemon
Primeiro, você precisará instalar o nodemon
em sua máquina. Instale o utilitário globalmente ou localmente em seu projeto usando o npm ou o Yarn:
Instale o nodemon
globalmente com o npm
:
- npm install nodemon -g
Ou com o Yarn:
- yarn global add nodemon
Instale o nodemon
localmente com o npm. Ao executar uma instalação local, podemos instalar o nodemon
como uma dependência de desenvolvimento com --save-dev
(ou --dev
):
- npm install nodemon --save-dev
Ou com o Yarn:
- yarn add nodemon --dev
Em relação à instalação local, fique ciente de que não será possível usar o comando nodemon
diretamente da linha de comando:
- Outputcommand not found: nodemon
No entanto, você pode usá-lo como parte de alguns scripts do npm ou com o npx.
Isso conclui o processo de instalação do nodemon
. Em seguida, vamos usar o nodemon
com os nossos projetos.
nodemon
Podemos usar o nodemon
para iniciar um script do Node. Por exemplo, se tivermos uma configuração do servidor Express em um arquivo server.js
, podemos iniciá-la e monitorar alterações desta forma:
- nodemon server.js
Você pode passar os argumentos da mesma forma que faria se estivesse executando o script com o Node:
- nodemon server.js 3006
Cada vez que você faz uma alteração em um arquivo com uma das extensões monitoradas padrão (.js
, .mjs
, .json
, .coffee
ou .litcoffee
) no diretório atual ou em um subdiretório, o processo será reiniciado.
Vamos supor que escrevemos um arquivo server.js
de exemplo que entrega a mensagem: Dolphin app listening on port ${port}!
.
Podemos executar o exemplo com o nodemon
:
- nodemon server.js
Vemos o seguinte resultado no terminal:
Output[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
Embora o nodemon
ainda esteja sendo executado, vamos alterar o arquivo server.js
para exibir a mensagem: Shark app listening on port ${port}!
.
Vemos o seguinte resultado adicional no terminal:
Output[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
O resultado do terminal do nosso aplicativo Node.js está sendo exibido como esperado. Reinicie o processo a qualquer momento digitando rs
e apertando ENTER
.
De maneira alternativa, o nodemon
também irá procurar um arquivo main
especificado no arquivo package.json
do seu projeto:
{
// ...
"main": "server.js",
// ...
}
Ou, um script start
:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Depois de fazer as alterações no package.json
, chame o nodemon
para iniciar o aplicativo de exemplo no modo de monitoramento sem precisar passar o server.js
.
É possível modificar as configurações disponíveis no nodemon
.
Vamos aprender um pouco sobre as opções principais:
--exec
: use a opção --exec
para especificar um binário com qual será executado o arquivo. Por exemplo, quando combinado com o binário ts-node
, o --exec
pode tornar-se útil para monitorar alterações e executar os arquivos do TypeScript.--ext
: especifique as diferentes extensões de arquivo a serem monitoradas. Para essa opção, forneça uma lista separada por vírgulas de extensões de arquivos (por exemplo, --ext js,ts
).--delay
: por padrão, o nodemon
espera um segundo para reiniciar o processo quando um arquivo é alterado, mas com a opção --delay
é possível especificar um atraso diferente. Por exemplo, nodemon --delay 3.2
para um atraso de 3.2 segundos.--watch
: use a opção --watch
para especificar vários diretórios ou arquivos a serem monitorados. Adicione uma opção --watch
para cada diretório que deseja monitorar. Por padrão, o diretório atual e seus subdiretórios são observados. Dessa forma, utilize o --watch
para arquivos ou subdiretórios específicos.--ignore
: use a opção --ignore
para ignorar certos arquivos, padrões de arquivos ou diretórios.--verbose
: um resultado mais detalhado com informações sobre os arquivos alterados para disparar um reinício.Visualize todas as opções disponíveis com o seguinte comando:
- nodemon --help
Usando essas opções, vamos criar o comando para satisfazer o seguinte cenário:
server
.ts
.test.ts
server/server.ts
) com o ts-node
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions for our scenario.
In the previous example, adding configuration switches when running nodemon can be pretty tedious. A better solution for projects that need specific configurations is to define those configurations in a nodemon.json file.
For example, here are the same configurations as in the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap in place of the --exec switch. execMap allows you to specify which binaries should be used for given file extensions.
Alternatively, if you prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you make the changes to either nodemon.json or package.json, start nodemon with your desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-paste or typing errors on the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view changes.
For more information about the available features and troubleshooting, consult the official documentation.
If you'd like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
]]>In Node.js, you need to restart the process for changes to take effect, which adds an extra step to your workflow. You can eliminate this extra step by using nodemon to restart the process automatically.
nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn about installing, setting up, and configuring nodemon.
To follow this article, you will need the following:
nodemon
First, you need to install nodemon on your machine. Install the utility either globally or locally on your project using npm or Yarn:
You can install nodemon globally with npm:
- npm install nodemon -g
Or with Yarn:
- yarn global add nodemon
You can also use npm to install nodemon locally. When performing a local installation, we can install nodemon as a dev dependency with --save-dev (or --dev):
- npm install nodemon --save-dev
Or with Yarn:
- yarn add nodemon --dev
One thing to be aware of with a local install is that you will not be able to use the nodemon command directly from the command line:
Output
command not found: nodemon
However, you can use it as part of some npm scripts or with npx.
This completes the nodemon installation process. Next, we will use nodemon with our projects.
nodemon
We can use nodemon to start a Node script. For example, if we have an Express server setup in a server.js file, we can start it and watch for changes like this:
- nodemon server.js
You can pass in arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the default watched extensions (.js, .mjs, .json, .coffee, or .litcoffee) in the current directory or a subdirectory, the process will restart.
Let's say we write an example server.js file that outputs the message: Dolphin app listening on port ${port}!.
We can run the example with nodemon:
- nodemon server.js
We will see the following terminal output:
Output
[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon is still running, let's make a change to the server.js file so that it outputs the message: Shark app listening on port ${port}!
We will see the following additional terminal output:
Output
[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from our Node.js app displays as expected. You can restart the process at any time by typing rs and pressing ENTER.
Alternatively, nodemon will also search for a main file specified in your project's package.json file:
{
// ...
"main": "server.js",
// ...
}
Or, a start script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you make those changes to package.json, you can then call nodemon to start the example app in watch mode without having to pass in server.js.
You can modify the configuration settings available to nodemon.
Let's go over some of the main options:
--exec: Use the --exec switch to specify a binary to execute the file with. For example, when combined with the ts-node binary, --exec can be useful for watching changes and executing TypeScript files.
--ext: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (for example, --ext js,ts).
--delay: By default, nodemon waits one second to restart the process when a file changes, but with the --delay switch you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
--watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so use --watch to narrow that down to specific subdirectories or files.
--ignore: Use the --ignore switch to ignore certain files, file patterns, or directories.
--verbose: A more verbose output with information about which file(s) changed to trigger a restart.
You can view all the available options with the following command:
- nodemon --help
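Patterns such as '*.test.ts' passed to --ignore are simple globs. As a rough, illustrative sketch of the matching idea (this is not nodemon's actual matcher), a glob can be translated into a regular expression where * means "any characters":

```javascript
// Sketch: matching a filename against a simple ignore glob such as
// '*.test.ts'. Illustrative only — not nodemon's real implementation.
function isIgnored(file, patterns) {
  return patterns.some((pattern) => {
    const escaped = pattern
      .split('*')
      .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
      .join('.*');
    return new RegExp('^' + escaped + '$').test(file);
  });
}

console.log(isIgnored('server.test.ts', ['*.test.ts'])); // → true
console.log(isIgnored('server.ts', ['*.test.ts']));      // → false
```

This is why '*.test.ts' skips test files while still restarting on changes to the matching source files.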
Using these options, let's create the command to satisfy the following scenario:
Watching the server directory
Watching only files with a .ts extension
Ignoring files with a .test.ts suffix
Executing the file (server/server.ts) with ts-node
Waiting three seconds to restart after a file change
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions for our scenario.
In the previous example, adding configuration switches when running nodemon can be pretty tedious. A better solution for projects that need specific configurations is to define those configurations in a nodemon.json file.
For example, here are the same configurations as in the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap in place of the --exec switch. execMap allows you to specify which binaries should be used for given file extensions.
Alternatively, if you prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you make the changes to either nodemon.json or package.json, you can then start nodemon with your desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-paste or typing errors on the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view changes.
For more information about the available features and troubleshooting, consult the official documentation.
If you'd like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
]]>In Node.js, you need to restart the process for changes to take effect, which adds an extra step to your workflow. You can eliminate this extra step by using nodemon to restart the process automatically.
nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn about installing, setting up, and configuring nodemon.
To follow this article, you will need the following:
nodemon
First, you need to install nodemon on your machine. Install the utility either globally or locally on your project using npm or Yarn:
You can install nodemon globally with npm:
- npm install nodemon -g
Or with Yarn:
- yarn global add nodemon
You can also install nodemon locally with npm. When performing a local installation, we can install nodemon as a dev dependency with --save-dev (or --dev):
- npm install nodemon --save-dev
Or with Yarn:
- yarn add nodemon --dev
One thing to be aware of with a local install is that you will not be able to use the nodemon command directly from the command line:
Output
command not found: nodemon
However, you can use it as part of some npm scripts or with npx.
This completes the nodemon installation process. Next, we will use nodemon with our projects.
nodemon
You can use nodemon to start a Node script. For example, if we have an Express server setup in a server.js file, we can start it and watch for changes like this:
- nodemon server.js
You can pass arguments the same way as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the default watched extensions (.js, .mjs, .json, .coffee, or .litcoffee) in the current directory or a subdirectory, the process will restart.
Let's say we write an example server.js file that outputs the message Dolphin app listening on port ${port}!.
We can run the example with nodemon:
- nodemon server.js
We see the following terminal output:
Output
[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon is still running, let's make a change to the server.js file so that it outputs the message: Shark app listening on port ${port}!
We see the following additional terminal output:
Output
[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from our Node.js app displays as expected. You can restart the process at any time by typing rs and pressing ENTER.
Alternatively, nodemon will also search for a main file specified in your project's package.json file:
{
// ...
"main": "server.js",
// ...
}
Or, a start script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you make those changes to package.json, you can invoke nodemon to start the example app in watch mode without having to pass in server.js.
You can modify the configuration settings available to nodemon.
Let's go over some of the main options:
--exec: Use the --exec switch to specify a binary to execute the file with. For example, when combined with the ts-node binary, --exec can be useful for watching changes and executing TypeScript files.
--ext: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (for example, --ext js,ts).
--delay: By default, nodemon waits one second to restart the process when a file changes, but with the --delay switch you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
--watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so use --watch to narrow that down to specific subdirectories or files.
--ignore: Use the --ignore switch to ignore certain files, file patterns, or directories.
--verbose: A more verbose output with information about which file(s) changed to trigger a restart.
You can view all the available options with the following command:
- nodemon --help
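To see why a restart delay matters, here is a sketch of the debounce idea behind --delay: wait a quiet period after the last change before restarting, so a burst of saves triggers only one restart. The injectable schedule/cancel parameters exist only to make the sketch testable; nodemon's real implementation differs.

```javascript
// Sketch of the idea behind --delay: collapse a burst of file-change events
// into a single restart that fires only after things have been quiet for
// delayMs. schedule/cancel default to the real timer functions and are
// injectable purely for testing.
function makeRestarter(restart, delayMs, schedule = setTimeout, cancel = clearTimeout) {
  let timer = null;
  return function onChange() {
    if (timer !== null) cancel(timer); // a new change resets the countdown
    timer = schedule(restart, delayMs);
  };
}
```

With nodemon --delay 3, every file change resets a three-second timer; only when the timer finally fires does the process restart.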
Using these options, let's create the command to satisfy the following scenario:
Watching the server directory
Watching only files with a .ts extension
Ignoring files with a .test.ts suffix
Executing the file (server/server.ts) with ts-node
Waiting three seconds to restart after a file change
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions for our scenario.
In the previous example, adding configuration switches when running nodemon can be pretty tedious. A better solution for projects that need specific configurations is to define those configurations in a nodemon.json file.
For example, here are the same configurations as in the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap in place of the --exec switch. execMap allows you to specify which binaries should be used for given file extensions.
Alternatively, if you prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
Once you make the changes to either nodemon.json or package.json, you can start nodemon with your desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-paste or typing errors on the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view changes.
For more information about the available features and troubleshooting, consult the official documentation.
If you'd like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
]]>In Node.js, you need to restart the process for changes to take effect, which adds an extra step to your workflow. You can eliminate this extra step by using nodemon to restart the process automatically.
nodemon is a command-line interface (CLI) utility developed by @rem that wraps your Node app, watches the file system, and automatically restarts the process.
In this article, you will learn about installing, setting up, and configuring nodemon.
To follow this article, you will need the following:
nodemon
First, you need to install nodemon on your machine. Install the utility either globally or locally using npm or Yarn:
You can install nodemon globally with npm:
- npm install nodemon -g
Or with Yarn:
- yarn global add nodemon
You can also install nodemon locally with npm. When performing a local installation, we can install nodemon as a dev dependency with --save-dev (or --dev):
- npm install nodemon --save-dev
Or with Yarn:
- yarn add nodemon --dev
One thing to be aware of with a local install is that you will not be able to use the nodemon command directly from the command line:
Output
command not found: nodemon
However, you can use it as part of some npm scripts or with npx.
This completes the nodemon installation process. Next, we will use nodemon with our projects.
nodemon
We can use nodemon to start a Node script. For example, if we have an Express server setup in a server.js file, we can start it and watch for changes like this:
- nodemon server.js
You can pass in arguments as if you were running the script with Node:
- nodemon server.js 3006
Every time you make a change to a file with one of the default watched extensions (.js, .mjs, .json, .coffee, or .litcoffee) in the current directory or a subdirectory, the process will restart.
Let's say we write an example server.js file that outputs the message: Dolphin app listening on port ${port}!.
We can run the example with nodemon:
- nodemon server.js
We see the following terminal output:
Output
[nodemon] 1.17.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Dolphin app listening on port 3000!
While nodemon is still running, let's make a change to the server.js file so that it outputs the message: Shark app listening on port ${port}!
We see the following additional terminal output:
Output
[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Shark app listening on port 3000!
The terminal output from our Node.js app displays as expected. You can restart the process at any time by typing rs and pressing ENTER.
Alternatively, nodemon will also search for a main file specified in your project's package.json file:
{
// ...
"main": "server.js",
// ...
}
Or, a start script:
{
// ...
"scripts": {
"start": "node server.js"
},
// ...
}
Once you make those changes to package.json, you can call nodemon to start the example app in watch mode without having to pass in server.js.
You can modify the configuration settings available to nodemon.
Let's go over some of the main options:
--exec: Use the --exec switch to specify a binary to execute the file with. For example, when combined with the ts-node binary, --exec can be useful for watching changes and executing TypeScript files.
--ext: Specify different file extensions to watch. For this switch, provide a comma-separated list of file extensions (for example, --ext js,ts).
--delay: By default, nodemon waits one second to restart the process when a file changes, but with the --delay switch you can specify a different delay. For example, nodemon --delay 3.2 for a 3.2-second delay.
--watch: Use the --watch switch to specify multiple directories or files to watch. Add one --watch switch for each directory you want to watch. By default, the current directory and its subdirectories are watched, so use --watch to narrow that down to specific subdirectories or files.
--ignore: Use the --ignore switch to ignore certain files, file patterns, or directories.
--verbose: A more verbose output with information about which file(s) changed to trigger a restart.
You can view all the available options with the following command:
- nodemon --help
Using these options, let's create the command to satisfy the following scenario:
Watching the server directory
Watching only files with a .ts extension
Ignoring files with a .test.ts suffix
Executing the file (server/server.ts) with ts-node
Waiting three seconds to restart after a file change
- nodemon --watch server --ext ts --exec ts-node --ignore '*.test.ts' --delay 3 server/server.ts
This command combines the --watch, --ext, --exec, --ignore, and --delay options to satisfy the conditions for our scenario.
In the previous example, adding configuration switches when running nodemon can be pretty tedious. A better solution for projects that need specific configurations is to define those configurations in a nodemon.json file.
For example, here are the same configurations as in the previous command-line example, but placed in a nodemon.json file:
{
"watch": ["server"],
"ext": "ts",
"ignore": ["*.test.ts"],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
}
Note the use of execMap in place of the --exec switch. execMap allows you to specify which binaries should be used for given file extensions.
Alternatively, if you prefer not to add a nodemon.json config file to your project, you can add these configurations to the package.json file under a nodemonConfig key:
{
"name": "test-nodemon",
"version": "1.0.0",
"description": "",
"nodemonConfig": {
"watch": [
"server"
],
"ext": "ts",
"ignore": [
"*.test.ts"
],
"delay": "3",
"execMap": {
"ts": "ts-node"
}
},
// ...
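Having two possible homes for the configuration raises the question of which one wins. nodemon's documentation notes that a local nodemon.json takes precedence, with any package.json config ignored when one is present. That resolution could be sketched as:

```javascript
// Sketch: if a nodemon.json object is present, use it and ignore the
// package.json "nodemonConfig" key entirely; otherwise fall back to it.
// Illustrative only — not nodemon's actual resolution code.
function resolveNodemonConfig(nodemonJson, packageJson) {
  if (nodemonJson) return nodemonJson;
  return (packageJson && packageJson.nodemonConfig) || {};
}
```

In practice this means you should pick one location per project rather than splitting settings across both files.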
Once you make the changes to either nodemon.json or package.json, you can start nodemon with your desired script:
- nodemon server/server.ts
nodemon will pick up the configurations and use them. This way, your configurations can be saved, shared, and repeated to avoid copy-paste or typing errors on the command line.
In this article, you explored how to use nodemon with your Node.js applications. This tool helps automate the process of stopping and starting a Node server to view changes.
For more information about the available features and troubleshooting, consult the official documentation.
If you'd like to learn more about Node.js, check out our Node.js topic page for exercises and programming projects.
]]>While you may upload images on the frontend, you need an API and database on the backend to receive them. With Multer and Express, a Node.js framework, you can set up file and image uploads in one place.
In this article, you will learn how to upload images with a Node.js backend using Multer and Express.
An understanding of Node.js is recommended. To learn more about Node.js, check out our How To Code in Node.js series.
A general understanding of HTTP request methods in Express is suggested. To learn more about HTTP request methods, check out our How To Define Routes and HTTP Request Methods in Express tutorial.
As Express is a Node.js framework, ensure that you have Node.js installed before following the next steps.
Create a new directory named node-multer-express
for your project:
- mkdir node-multer-express
Change into the new directory:
- cd node-multer-express
Initialize a new Node.js project. This creates the package.json file that records your dependencies:
- npm init
Create your entry file, index.js
. This is where you will handle your Express logic:
- touch index.js
Install Multer, Express, and morgan as dependencies:
- npm install multer express morgan --save
Multer is your image upload library and manages accessing form data from an Express request. morgan is Express middleware for logging network requests.
To set up your Multer library, use the .diskStorage()
method to tell Express where to store files to the disk. In your index.js
file, require Multer and declare a storage
variable and assign its value the invocation of the .diskStorage()
method:
const multer = require('multer');
const storage = multer.diskStorage({
destination: function(req, file, callback) {
callback(null, '/src/my-images');
},
filename: function (req, file, callback) {
callback(null, file.fieldname);
}
});
The destination property on the diskStorage() method determines the directory in which the files will be stored. Here, the files are stored in the my-images directory. If you haven't set a destination, the operating system defaults to a directory for temporary files.
The filename property indicates what to name your files. If you do not set a filename, Multer generates a random name for each file.
Note: Multer does not add extensions to file names, and it’s recommended to return a filename complete with a file extension.
With your Multer setup complete, let’s combine it within your Express server.
Your Express server is where you handle the logic for HTTP request methods and the request/response lifecycle, and where you can wire in the Multer and morgan dependencies for file and image transfer.
In your index.js
file, declare an app
variable and assign its value an Express instance. Require in Multer and morgan, and declare an upload
variable to store a Multer instance:
const path = require('path');
const express = require('express');
const morgan = require('morgan');
const multer = require('multer');

const app = express();
const upload = multer({ dest: 'uploads/' });

app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(morgan('dev'));
app.use(express.static(path.join(__dirname, 'public')));
You'll use the Express middleware .use() to pass in the .json() middleware, which parses incoming request bodies as JSON objects. .use() also accepts an invocation of morgan with the argument 'dev', which tells Express to use morgan's development output to report response statuses. To serve static files, pass the Express middleware .static() to .use() with the directory containing your images as an argument.
Once you've set up your global variables, define a POST route with req and res callback parameters to receive new files and images:
app.post('/', upload.single('file'), (req, res) => {
if (!req.file) {
console.log("No file received");
return res.send({
success: false
});
} else {
console.log('file received');
return res.send({
success: true
})
}
});
When the route receives a file or image, Multer saves it to your specified directory. The second argument in your POST request, upload.single(), is a built-in Multer method that accepts a single file under the given fieldname and stores it in the Express req.file object. The fieldname property is defined in your Multer .diskStorage() method.
Should you integrate a database, you can require the filename in your index.js
file:
const host = req.hostname;
const filePath = req.protocol + "://" + host + '/' + req.file.path;
Save the filePath value to the database, and use the stored file names when working with your database.
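String concatenation like the filePath line above is easy to get subtly wrong, so it can help to isolate it in a small function. buildFileUrl is a hypothetical helper name, not part of Express or Multer:

```javascript
// Hypothetical helper mirroring the filePath construction above: build the
// public URL for a stored file from the request's protocol and host.
function buildFileUrl(protocol, host, storedPath) {
  return `${protocol}://${host}/${storedPath}`;
}

console.log(buildFileUrl('http', 'localhost:3000', 'uploads/abc123'));
// → http://localhost:3000/uploads/abc123
```

Inside the route handler you would call it as buildFileUrl(req.protocol, req.hostname, req.file.path) before writing the result to the database.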
Express provides you a process to save and store incoming files and images into your server. The middleware dependency Multer streamlines your form data to handle multiple file uploads.
If you’d like to learn more about Node.js, take a look at our How To Code in React.js series, or check out our Node.js topic page for exercises and programming projects.
]]>When the browser loads a page, it executes a lot of code to render the content. The code could be from the same origin as the root document, or a different origin. By default, the browser does not distinguish between the two and executes any code requested by a page regardless of the source. Attackers use this exploit to maliciously inject scripts to the page, which are then executed because the browser has no way of determining if the content is harmful. These situations are where a Content Security Policy (CSP) can provide protection.
A CSP is an HTTP header that provides an extra layer of security against code-injection attacks, such as cross-site scripting (XSS), clickjacking, and other similar exploits. It facilitates the creation of an “allowlist” of trusted content and blocks the execution of code from sources not present in the allowlist. It also reports any policy violations to a URL of your choice, so that you can keep abreast of potential security attacks.
With the CSP header, you can specify approved sources for content on your site that the browser can load. Any code that is not from the approved sources, will be blocked from executing, which makes it considerably more difficult for an attacker to inject content and siphon data.
In this tutorial, you’ll review the different protections the CSP header offers by implementing one in an example Node.js application. You’ll also collect JSON reports of CSP violations to catch problems and fix exploits quickly.
To follow this tutorial, you will need the following:
You should also use a recent browser version, preferably Chrome, as it has the best support for CSP level 3 directives at the time of writing this article (November 2020). Also, make sure to disable any third-party extensions while testing the CSP implementation so that they don’t interfere with the violation reports rendered in the console.
To demonstrate the process of creating a Content Security Policy, we’ll work through the entire process of implementing one for this demo project. It’s a one-page website with a variety of content that approximates a typical website or application. It includes a small Vue.js application, YouTube embeds, and some images sourced from Unsplash. It also uses Google fonts and the Bootstrap framework, which is loaded over a content delivery network (CDN).
In this step, you’ll set up the demo project on your test server or local machine and view it in your browser.
First, clone the project to your filesystem using the following command:
- git clone https://github.com/do-community/csp-demo
Once your project directory is set up, change into it with the following command:
- cd csp-demo
Next, install the dependencies specified in the package.json
file with the next command. You use the express
package to set up the web server, while nodemon
helps to automatically restart the node application when it detects file changes in the directory:
- npm install
Once you've installed the dependencies, enter the following command to start the web server on port 5500:
:
- npm start
You can now visit your_server_ip:5500
or localhost:5500
in your browser to view the demo page. You will find the text Hello World!, a YouTube embed, and some images on the page.
In the next section, we’ll implement a CSP policy that covers only the most basic protections. We’ll then build on that in the subsequent sections as we uncover all the legitimate resources that we need to allow on the page.
Let’s go ahead and write a CSP policy that restricts fonts, images, scripts, styles, and embeds to those originating from the current host only. The following is the response header that achieves this:
Content-Security-Policy: default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self';
Here’s an explanation of the policy directives in this header:
font-src
defines the sources from which fonts can be loaded.
img-src
defines the sources from which image loading is permitted.
script-src
controls any script-loading privileges on a web page.
style-src
is the directive for allowing stylesheet sources.
frame-src
defines allowed sources for frame embeds. (It was deprecated in CSP level 2, but reinstated in level 3.)
default-src
defines a fallback policy for certain directives if they are not explicitly specified in the header. Here is a complete list of the directives that fall back to default-src
.
In this example, all the specified directives are assigned the 'self'
keyword in their source list. This indicates that only resources from the current host (including the URL scheme and port number) should be allowed to execute. For example, script-src 'self'
allows the execution of scripts from the current host, but it blocks all other script sources.
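To make the fallback behavior concrete, here is a small illustrative helper (not part of any CSP library; the function and variable names are assumptions for this sketch) that resolves which source list applies to a given directive:

```javascript
// Directives used in the policy above that fall back to default-src when
// they are not set explicitly. (This is a simplified subset; e.g. frame-src
// technically falls back to child-src first.)
const FALLS_BACK_TO_DEFAULT = new Set([
  'font-src', 'img-src', 'script-src', 'style-src', 'frame-src',
]);

function effectiveSources(policy, directive) {
  if (policy[directive]) {
    return policy[directive];
  }
  if (FALLS_BACK_TO_DEFAULT.has(directive)) {
    return policy['default-src'] || [];
  }
  return [];
}

const policy = {
  'default-src': ["'self'"],
  'img-src': ["'self'", 'https://images.unsplash.com'],
};

console.log(effectiveSources(policy, 'img-src'));    // explicit list wins
console.log(effectiveSources(policy, 'script-src')); // falls back to default-src
```

This mirrors how the browser evaluates the header: an explicitly listed directive wins, and anything missing inherits the default-src source list.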
Let’s go ahead and add the header to our Node.js project.
Leave your app running and open a new terminal window to work with your server.js
file:
- nano server.js
Next, add the CSP header from the example in an Express middleware layer. This ensures that you’re including the header in every response from the server:
const express = require('express');
const bodyParser = require('body-parser');
const path = require('path');
const app = express();
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy',
"default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self'"
);
next();
});
app.use(bodyParser.json());
app.use(express.static(path.join(__dirname)));
app.get('/', (req, res) => {
res.sendFile(path.join(__dirname + '/index.html'));
});
const server = app.listen(process.env.PORT || 5500, () => {
const { port } = server.address();
console.log(`Server running on PORT ${port}`);
});
Save the file and reload the project in your browser. You’ll notice that the page is completely broken.
Our CSP header is working as expected and all the external sources that we included on the page have been blocked from loading because they violate the defined policy. However, this is not an ideal way to test a brand-new policy since it can break a website when violations occur.
This is why the Content-Security-Policy-Report-Only
header exists. You can use it instead of Content-Security-Policy
to prevent the browser from enforcing the policy, while still reporting the violations that occur—this means that you can refine the policy without putting your site at risk. Once you’re happy with your policy, you can switch back to the enforcing header so that the protections are activated.
Go ahead and replace the Content-Security-Policy
header with Content-Security-Policy-Report-Only
in your server.js
file:
- nano server.js
Add the following highlighted code:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self'"
);
next();
});
. . .
Save the file and reload the page in your browser. The page returns to a working state, but the browser console still reports the CSP violations. Each violation is prefixed with [Report Only]
to indicate that the policy is not enforced.
In this section, we created the initial implementation of our CSP and set it to report-only mode so that we can refine it without causing the site to break. In the next section, we’ll fix the violations triggered through our initial CSP.
The policy we implemented in the previous section triggered several violations because we restricted all resources to the origin only—however, we have several third-party assets on the page.
The two ways to fix CSP violations are: approving the sources in the policy, or removing the code that triggers the violations. Since legitimate resources are triggering all the violations, we’ll concentrate mainly on the former option in this section.
Open up your browser console. It will display all the current violations of the CSP. Let’s fix each of these issues.
The first two violations in the console are from the Google fonts and Bootstrap stylesheets, which you’re loading from https://fonts.googleapis.com
and https://cdn.jsdelivr.net
respectively. You can allow them both on the page through the style-src
directive:
style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net;
This specifies that CSS files from the origin host, https://fonts.googleapis.com
, and https://cdn.jsdelivr.net
, should be executed on the page. This policy is quite broad, because it allows any stylesheet from the allowlist domains (not just the ones you’re currently using).
We can be more specific by using the exact file or directory we’d like to allow instead:
style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css;
Now, it will only allow the exact, specified stylesheet to execute. It will block all other stylesheets—even if they originate from https://cdn.jsdelivr.net
.
Update the CSP header with the new style-src
directive as shown in the following. When you reload the page, both violations will be resolved:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self';"
);
next();
});
. . .
The images you’re using on the page are from a single source: https://images.unsplash.com
. Let’s allow it through the img-src
directive like the following:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self'; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self'"
);
next();
});
. . .
You can allow valid sources for nested browsing contexts, which use elements such as <iframe>
, through the frame-src
directive. If this directive is absent, the browser will look for the child-src
directive, which subsequently falls back to the default-src
directive.
Our current policy limits frame embeds to the origin host. Let’s add https://www.youtube.com
to the allowlist so that the CSP doesn’t block YouTube embeds from loading once we enforce the policy:
frame-src 'self' https://www.youtube.com;
Note that the www
subdomain is significant here. If you have an embed from https://youtube.com
, it will be blocked according to this policy unless you also add https://youtube.com
to the allowlist:
frame-src 'self' https://www.youtube.com https://youtube.com;
Here’s the updated CSP header. Change the frame-src
directive as follows:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self'; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
);
next();
});
. . .
The Google fonts stylesheet contains references to several font files from https://fonts.gstatic.com
. You will find that these files currently violate the defined font-src
policy (currently 'self'
), so you need to account for them in your revised policy:
font-src 'self' https://fonts.gstatic.com;
Update the font-src
directive in the CSP header as follows:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
);
next();
});
. . .
Once you reload the page, the console will no longer report violations for the Google fonts files.
A Vue.js script loaded over a CDN is rendering the Hello World! text at the top of the page. We’ll allow its execution on the page through the script-src
directive. As mentioned earlier, it’s important to be specific when allowing CDN sources so we don’t open up our site to other possible malicious scripts that are hosted on that domain.
script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/vue.min.js;
In this example, you’re only allowing the exact script at that URL to execute on the page. If you want to switch between development and production builds, you’ll need to add the other script URL to the allowlist as well, or you can allow all the resources present in the /dist
location, to cover both cases at once:
script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/;
Here’s the updated CSP header with the relevant changes:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
);
next();
});
. . .
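The path semantics used above can be sketched with a small helper. This is a deliberately simplified illustration of CSP source matching (it ignores scheme and host wildcards and redirect edge cases), not a production matcher:

```javascript
// Simplified illustration of CSP path matching: a source expression ending
// in '/' acts as a path prefix, while a source with a full file path must
// match the resource URL exactly.
function matchesSource(source, url) {
  if (source.endsWith('/')) {
    return url.startsWith(source);
  }
  return url === source;
}

// A prefix source admits every file under /dist/
console.log(matchesSource(
  'https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/',
  'https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/vue.min.js'
)); // true

// An exact source blocks sibling files on the same host
console.log(matchesSource(
  'https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css',
  'https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.js'
)); // false
```

This is why allowing an exact stylesheet URL is tighter than allowing the whole CDN host, while allowing a /dist/ directory covers both the development and production builds.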
At this point, we’ve successfully allowed all the external files and scripts that our page relies on. But we still have one more CSP violation to resolve due to the presence of an inline script on the page. We’ll explore a few solutions to this problem in the next section.
Although you can approve inline code (such as JavaScript code in a <script>
tag) within a CSP using the 'unsafe-inline'
keyword, it is not recommended because it greatly increases the risk of a code-injection attack.
This example policy allows the execution of any inline script on the page, but this is not safe for the aforementioned reasons.
script-src 'self' 'unsafe-inline' https://unpkg.com/vue@3.0.2/dist/;
The best way to avoid using unsafe-inline
is to move the inline code to an external file and reference it that way. This is a better approach for caching, minification, and maintainability, and it also makes the CSP easier to modify in the future.
However, if you absolutely must use inline code, there are two major ways to add it to your allowlist safely.
This method requires that you calculate a SHA hash that is based on the script itself and then add it to the script-src
directive. In recent versions of Chrome, you don’t even need to generate the hash yourself as it’s already included in the CSP violation error in the console:
[Report Only] Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' https://unpkg.com/vue@3.0.2/dist/". Either the 'unsafe-inline' keyword, a hash ('sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='), or a nonce ('nonce-...') is required to enable inline execution.
The highlighted section here is the exact SHA256 hash that you would need to add to the script-src
directive to allow the execution of the specific inline script that triggered the violation.
Copy the hash and add it to your CSP as follows:
script-src 'self' https://unpkg.com/vue@3.0.2/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM=';
The disadvantage of this approach is that if the contents of the script change, the generated hash will be different, which will trigger a violation.
The second way to allow the execution of inline code is by using a nonce. These are random strings that you can use to allow a complete block of code regardless of its content.
Here’s an example of a nonce value in use:
script-src 'self' https://unpkg.com/vue@3.0.2/dist/ 'nonce-EDNnf03nceIOfn39fn3e9h3sdfa'
The value of the nonce in the CSP must match the nonce
attribute on the script:
<script nonce="EDNnf03nceIOfn39fn3e9h3sdfa">
// Some inline code
</script>
Nonces must be unguessable and dynamically generated each time the page is loaded so that an attacker is unable to use them for the execution of a malicious script. If you decide to implement this option, you can use the crypto
package to generate a nonce as follows:
const crypto = require('crypto');
let nonce = crypto.randomBytes(16).toString('base64');
We will opt for the hash method in this tutorial since it’s more practical for our use case.
Update the script-src
directive in your CSP header to include the SHA256 hash of the sole inline script, as shown in the following:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://unpkg.com/vue@3.0.2/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
);
next();
});
. . .
This removes the final CSP violation error that the inline script triggers from the console.
In the next section, we will monitor the effects of our CSP in a production environment.
Once you have your CSP in place, you need to keep an eye on its effect in production. For example, you might forget to allow a legitimate source, or an attacker might attempt to exploit an XSS attack vector, which you need to identify and stop immediately.
Without some form of active reporting in place, there’s no way to know of these events. This is why the report-to
directive exists. It specifies a location that the browser should POST a JSON-formatted violation report to, in the event it has to take action based on the CSP.
To use this directive, you need to add an additional header to specify an endpoint for the Reporting API:
Report-To: {"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"http://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}
Once that is set, specify the group name in the report-to
directive as follows:
report-to csp-endpoint;
Here’s the updated portion of the server.js
file with the changes. Be sure to replace the <your_server_ip>
placeholder in the Report-To
header with your actual server IP address:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Report-To',
'{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://<your_server_ip>:5500/__cspreport__"}],"include_subdomains":true}'
);
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint;"
);
next();
});
. . .
The report-to
directive is intended to replace the now deprecated report-uri
directive, but most browsers don’t support it yet (as of November 2020). So, for compatibility with current browsers while also remaining compatible with future browser releases, you should specify both report-uri
and report-to
in your CSP. If the latter is supported, it will ignore the former:
. . .
app.use(function (req, res, next) {
res.setHeader(
'Report-To',
'{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}'
);
res.setHeader(
'Content-Security-Policy-Report-Only',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint; report-uri /__cspreport__;"
);
next();
});
. . .
The /__cspreport__
route needs to exist on the server as well; add it to your file as follows:
. . .
app.get('/', (req, res) => {
res.sendFile(path.join(__dirname + '/index.html'));
});
app.post('/__cspreport__', (req, res) => {
console.log(req.body);
// Acknowledge the report so the browser's request doesn't hang
res.sendStatus(204);
});
. . .
Some browsers send the Content-Type
of the report payload as application/csp-report
, while others use application/json
. If the report-to
directive is supported, the Content-Type
should be application/reports+json
.
To account for all the possible Content-Type
values, you have to configure body parsing on your Express server accordingly:
. . .
app.use(
bodyParser.json({
type: ['application/json', 'application/csp-report', 'application/reports+json'],
})
);
. . .
At this point, any CSP violations will be sent to the /__cspreport__
route and subsequently logged to the terminal.
You can try it out by adding a resource from a source that is not compliant with the current CSP, or modifying the inline script in the index.html
file as shown in the following:
. . .
<script>
new Vue({
el: '#vue',
render(createElement) {
return createElement('h1', 'Hello World!');
},
});
console.log("Hello")
</script>
. . .
This will trigger a violation because the hash of the script is now different from what you included in the CSP header.
Here’s a typical violation report from a browser using the report-uri
:
{
'csp-report': {
'document-uri': 'http://localhost:5500/',
referrer: '',
'violated-directive': 'script-src-elem',
'effective-directive': 'script-src-elem',
'original-policy': "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-uri /__cspreport__;",
disposition: 'report',
'blocked-uri': 'inline',
'line-number': 58,
'source-file': 'http://localhost:5500/',
'status-code': 200,
'script-sample': ''
}
}
The parts to this report are:
document-uri
: The page the violation occurred on.referrer
: The page’s referrer.blocked-uri
: The resource that violated the page’s policy (an inline
script in this case).line-number
: The line number where the inline code begins.violated-directive
: The specific directive that was violated. (script-src-elem
in this case, which falls back to script-src
.)original-policy
: The complete policy of the page.If the browser supports the report-to
directive, the payload will have a structure similar to the following. Notice how it differs from the report-uri
payload while still carrying the same information:
[{
"age": 16796,
"body": {
"blocked-uri": "https://vimeo.com",
"disposition": "enforce",
"document-uri": "https://localhost:5500/",
"effective-directive": "frame-src",
"line-number": 58,
"original-policy": "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-uri /__cspreport__;",
"referrer": "",
"script-sample": "",
"sourceFile": "https://localhost:5500/",
"violated-directive": "frame-src"
},
"type": "csp",
"url": "https://localhost:5500/",
"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
}]
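Because the two payload shapes differ, a report endpoint may want to normalize them before logging or storing reports. Here is an illustrative helper (an assumption for this sketch, not part of the tutorial's code) that extracts the violated directive from either shape, using the field names shown in the example payloads above:

```javascript
// report-uri sends a single object under the 'csp-report' key, while
// report-to sends an array of report objects with details under 'body'.
function violatedDirective(payload) {
  if (Array.isArray(payload)) {
    return payload[0].body['violated-directive'];
  }
  return payload['csp-report']['violated-directive'];
}

const reportUriPayload = {
  'csp-report': { 'violated-directive': 'script-src-elem' },
};
const reportToPayload = [
  { body: { 'violated-directive': 'frame-src' }, type: 'csp' },
];

console.log(violatedDirective(reportUriPayload)); // 'script-src-elem'
console.log(violatedDirective(reportToPayload));  // 'frame-src'
```

A helper like this could be called inside the /__cspreport__ handler so both browser families feed the same logging pipeline.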
Note: The report-to
directive is supported only in secure contexts, which means that you need to set up your Express server with a valid HTTPS certificate, otherwise you won’t be able to test or use it.
In this section, we successfully set up CSP monitoring on our server so that we can detect and fix problems quickly. Let’s go ahead and finish this tutorial by enforcing the final policy in the next step.
Once you’re confident your CSP is set up correctly (ideally after leaving it in production for a few days or weeks in report-only mode), you can enforce it by changing the CSP header from Content-Security-Policy-Report-Only
to Content-Security-Policy
.
In addition to reporting the violations, this will stop unauthorized resources from being executed on the page, leading to a safer experience for your visitors.
Here is the final version of the server.js
file:
const express = require('express');
const bodyParser = require('body-parser');
const path = require('path');
const app = express();
app.use(function (req, res, next) {
res.setHeader(
'Report-To',
'{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"http://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}'
);
res.setHeader(
'Content-Security-Policy',
"default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint; report-uri /__cspreport__;"
);
next();
});
app.use(
bodyParser.json({
type: [
'application/json',
'application/csp-report',
'application/reports+json',
],
})
);
app.use(express.static(path.join(__dirname)));
app.get('/', (req, res) => {
res.sendFile(path.join(__dirname + '/index.html'));
});
app.post('/__cspreport__', (req, res) => {
console.log(req.body);
// Acknowledge the report so the browser's request doesn't hang
res.sendStatus(204);
});
const server = app.listen(process.env.PORT || 5500, () => {
const { port } = server.address();
console.log(`Server running on PORT ${port}`);
});
Browser support
The CSP header is supported in all browsers with the exception of Internet Explorer, which uses the non-standard X-Content-Security-Policy
header instead. If you need to support IE, you have to issue the CSP twice in the response headers.
The latest version of the CSP spec (level 3) also introduced some newer directives that are not well supported at the moment. Examples include the script-src-elem
and prefetch-src
directives. Make sure to use the appropriate fallbacks when setting up those directives to ensure the protections remain active in browsers that have not caught up yet.
In this article, you’ve set up an effective Content Security Policy for a Node.js application and monitored the violations. If you have an older or more complex website, it will require a wider policy setup that covers all the bases. However, setting up a thorough policy is worth the effort as it makes it a lot harder for an attacker to exploit your website to steal user data.
You can find the final code from this tutorial in this GitHub repository.
For more security-related articles, check out our Security topic page. If you would like to learn more about working with Node.js, you can read our How To Code in Node.js series.
In this article, you will learn about the res
object in Express. Short for response
, the res
object is one half of the request
and response
cycle, used to send data from the server to the client-side in response to HTTP requests.
An understanding of Node.js is helpful but not required. To learn more about Node.js, check out our How To Code in Node.js series.
A general knowledge of HTTP requests. To learn more about HTTP Requests, check out our tutorial on How To Define Routes and HTTP Request Methods in Express.
.status()
and .append()
MethodsThe .send()
method on the res
object forwards any data passed as an argument to the client-side. The method can take a string, array, and an object as an argument.
In your index.js
file, implement a GET
request with the route '/home'
:
app.get('/home', (req, res) => {
res.send('Hello World!');
});
Notice that the GET
request takes a callback argument with req
and res
as arguments. You can use the res
object within the GET
request to send the string Hello World!
to the client-side.
The .send()
method also defines its own built-in headers natively, depending on the Content-Type
and Content-Length
of the data.
The res
object can also specify HTTP status codes with the .status()
method. In your index.js
file, integrate the .status()
method on the res
object and pass in an HTTP status code as an argument:
res.status(404).send('Not Found');
The .status()
method on the res
object will set a HTTP status code of 404
. To send the status code to the client-side, you can method chain using the .send()
method. The status code 404
tells the client side that the data requested is not found.
The .sendStatus()
method is a shorthand syntax to adapt the functionality of both the .status()
and .send()
methods:
res.sendStatus(404);
Here, the .sendStatus()
method will set the HTTP status code 404
and send it to the client-side in one call.
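To make the shorthand concrete, here is a minimal stub (not Express itself, just a sketch of the chaining behavior under the assumption that .sendStatus() composes .status() and .send()):

```javascript
// Minimal stub of the relevant response methods; real Express maps every
// status code to its standard reason phrase.
const STATUS_TEXT = { 200: 'OK', 404: 'Not Found' };

function makeRes() {
  const res = {
    statusCode: 200,
    sent: [],
    status(code) { res.statusCode = code; return res; },
    send(body) { res.sent.push(body); return res; },
    sendStatus(code) {
      // One call sets the code and sends its reason phrase as the body
      return res.status(code).send(STATUS_TEXT[code] || String(code));
    },
  };
  return res;
}

const res = makeRes();
res.sendStatus(404);
console.log(res.statusCode, res.sent[0]); // prints: 404 Not Found
```

Returning res from each method is what makes the .status(404).send('Not Found') chaining in the earlier example possible.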
HTTP status codes summarize your Express server’s response. Browsers rely on HTTP status codes to inform the client-side whether the requested data exists or whether an internal server error occurred.
To define a header in your server response, apply the .append()
method. In your index.js
file, pass in a header as the first argument and a value as the second in your call to .append()
:
res.append('Content-Type', 'application/javascript; charset=UTF-8');
res.append('Connection', 'keep-alive')
res.append('Set-Cookie', 'divehours=fornightly')
res.append('Content-Length', '5089990');
In one line of code, the .append()
method accepts standard and non-standard headers in your server response.
.redirect()
, .render()
, and .end()
MethodsThe redirect()
method on the res
object will direct the client side to a different page. If a user inputs their login credentials on the client-side, the .redirect()
method will facilitate the switch to their access page.
In your index.js
file, set a .redirect()
method on the res
object:
res.redirect('/sharks/shark-facts')
Here, the .redirect()
method will forward the client side to the route '/sharks/shark-facts'
.
The .render()
method accepts an HTML file as an argument and sends it to the client-side. The method also accepts an optional second argument, a locals object, with custom properties to define the file sent to the client-side.
In your index.js
file, implement a GET
request with the route '/shark-game'
:
app.get('/shark-game', (req, res) => {
res.render('shark.html', {status: 'good'});
});
Using the .render()
method on the res
object will send the HTML file shark.html
and the local object with the status
property to the client-side.
The .end()
method will terminate the response cycle. It is recommended to use the .end()
method as the last call in your response to the client-side.
In your index.js
file, set a .sendStatus()
method chained with .end()
:
res.sendStatus(404).end();
The .end()
method will complete the response once the HTTP status code 404
sets and sends it to the client-side.
The res
object not only facilitates data transfer but also file handling. Let’s look at other methods the res
object contains for file management.
res
ObjectTo send HTML, CSS, and JavaScript files to the client side, use the .sendFile()
method on the res
object. In your index.js
file, set a GET
request to the route '/gallery/:fileName'
:
// GET https://sharks.com/gallery/shark-image.jpg
app.get('/gallery/:fileName', function (req, res, next) {
var options = {
root: path.join(__dirname, 'public')
};
res.sendFile(req.params.fileName, options, function (err) {
if (err) next(err);
else console.log('Sent:', req.params.fileName);
});
});
Within the GET
request, the variable options
takes an object and the public
directory as the value in a call to path.join()
to serve as an absolute path. Contents in the public
directory include HTML, CSS, and JavaScript files. The .sendFile()
method call takes the options
variable as a second argument and sets an error handler as the third. This will send the files stored in the public
directory to the client-side.
You can also facilitate file handling with the .download()
method on the res
object. In your index.js
file, implement a GET
request to the route '/gallery/:fileName'
:
// GET https://sharks.com/gallery/shark-image.jpg
app.get('/gallery/:fileName', function(req, res){
const file = `${__dirname}/public/${req.params.fileName}`;
res.download(file);
});
The .download()
method sends and prompts the client-side to download a file and sets appropriate headers for the file type in one call.
To set the value of the Content-Type
header for a file type, use the .type()
method on the res
object. In your index.js
file, set a .type()
method on the res
object and pass a file type as an argument:
res.type('png') // => 'image/png'
res.type('html') // => 'text/html'
res.type('application/json') // =>'application/json'
The .type()
method sets the Content-Type
header to the value associated with the given file type.
The res
object holds methods to facilitate data and file transfer as part of your response
cycle from your Express server to the client-side. To get comprehensive information about the res
object, visit the Express.js official documentation website.
This is a cheat sheet that you can use as a handy reference for npm & Yarn commands.
For a more comprehensive overview of npm, explore our tutorial How To Use Node.js Modules with npm and package.json .
There are many similarities between npm and Yarn. Yarn (released 2016) drew considerable inspiration from npm (2010).
On the flip side, their similarities can lead to confusion and small mistakes when you find yourself using both package managers.
Here is a useful reference to keep the two CLIs straight:
| Command | npm | yarn |
|---|---|---|
| Install dependencies | npm install | yarn |
| Install package | npm install [package] | yarn add [package] |
| Install dev package | npm install --save-dev [package] | yarn add --dev [package] |
| Uninstall package | npm uninstall [package] | yarn remove [package] |
| Uninstall dev package | npm uninstall --save-dev [package] | yarn remove [package] |
| Update | npm update | yarn upgrade |
| Update package | npm update [package] | yarn upgrade [package] |
| Global install package | npm install --global [package] | yarn global add [package] |
| Global uninstall package | npm uninstall --global [package] | yarn global remove [package] |
Here are some commands that Yarn decided not to change:
| npm | yarn |
|---|---|
| npm init | yarn init |
| npm run | yarn run |
| npm test | yarn test |
| npm login (and logout) | yarn login (and logout) |
| npm link | yarn link |
| npm publish | yarn publish |
| npm cache clean | yarn cache clean |
Node is a runtime environment that makes it possible to write server-side JavaScript. It has gained widespread adoption since its release in 2011. Writing server-side JavaScript can be difficult as a codebase grows, owing to the nature of the JavaScript language: dynamic and weakly typed.
Developers coming to JavaScript from other languages often complain about its lack of strong static typing; this is where TypeScript comes in, to bridge that gap.
TypeScript is a typed (optional) superset of JavaScript that can help with building and managing large-scale JavaScript projects. It can be thought of as JavaScript with additional features such as strong static typing, compilation, and object-oriented programming.
Note: TypeScript is technically a superset of JavaScript, which means that all JavaScript code is valid TypeScript code.
TypeScript brings benefits such as strong static typing, errors caught at compile time rather than at runtime, and richer editor tooling.
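As a small illustration of the static-typing benefit (the function and variable names below are invented for this example, not part of the tutorial's code):

```typescript
// Minimal sketch of a compile-time check TypeScript adds over plain
// JavaScript. The greet function here is invented for illustration.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

const message: string = greet("TypeScript");
console.log(message); // Hello, TypeScript!

// The following call would be rejected by the compiler before the code
// ever runs, whereas plain JavaScript would only fail at runtime:
// greet(42); // error TS2345: Argument of type 'number' is not assignable
```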
In this tutorial, you will set up a Node project with TypeScript. You will build an Express application using TypeScript and transpile it down to neat, reliable JavaScript code.
Before beginning this guide, you will need Node.js installed on your machine. You can accomplish this by following the How To Install Node.js and Create a Local Development Environment guide for your operating system.
To get started, create a new folder named node_project
and move into that directory:
- mkdir node_project
- cd node_project
Next, initialize it as an npm project:
- npm init
After running npm init
, you will need to supply npm with information about your project. If you would rather let npm assume sensible defaults, you can add the y
flag to skip the prompts for additional information:
- npm init -y
Now that your project space is set up, you are ready to install the necessary dependencies.
With a blank npm project initialized, the next step is to install the dependencies required to run TypeScript.
Run the following commands from your project directory to install the dependencies:
- npm install -D typescript@3.3.3
- npm install -D tslint@5.12.1
The -D
flag is shorthand for --save-dev
. You can learn more about this flag in the npmjs documentation.
Now it is time to install the Express framework:
- npm install -S express@4.16.4
- npm install -D @types/express@4.16.1
The second command installs the Express types for TypeScript support. Types in TypeScript are files, normally with a .d.ts
extension, that are used to provide type information about an API, in this case the Express framework.
This package is required because TypeScript and Express are independent packages. Without the @types/express
package, there is no way for TypeScript to know about the types of Express classes.
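For a sense of what such a file contains, here is a hedged sketch of a type declaration; the module and member names are hypothetical, not taken from the real @types/express package:

```typescript
// declarations.d.ts — a hypothetical example of a type declaration file.
// It describes the shape of an untyped JavaScript module so the compiler
// can type-check calls into it; it contains no runnable code itself.
declare module "some-untyped-library" {
  export function connect(url: string): Promise<void>;
  export const version: string;
}
```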
In this section, you will configure TypeScript and set up linting for TypeScript. TypeScript uses a file called tsconfig.json
to configure the compiler options for a project. Create a tsconfig.json
file in the root of the project directory and paste in the following snippet:
{
  "compilerOptions": {
    "module": "commonjs",
    "esModuleInterop": true,
    "target": "es6",
    "moduleResolution": "node",
    "sourceMap": true,
    "outDir": "dist",
    "lib": ["es2015"]
  }
}
Let's go over some of the keys in the JSON snippet above:
- module
: specifies the module code generation method. Node uses commonjs
.
- target
: specifies the output language level.
- moduleResolution
: helps the compiler figure out what an import refers to. The value node
mimics the Node module resolution mechanism.
- outDir
: the location to output the .js
files after transpilation. In this tutorial you will save it as dist
.
An alternative to creating and populating the tsconfig.json
file manually is to run the following command:
- tsc --init
This command will generate a well-commented tsconfig.json
file.
To learn more about the available key-value options, the official TypeScript documentation offers explanations of every option.
Now you can configure TypeScript linting for the project. In a terminal running in the root of your project directory, which this tutorial established as node_project
, run the following command to generate a tslint.json
file:
- ./node_modules/.bin/tslint --init
Open the newly generated tslint.json
file and add the no-console
rule accordingly:
{
"defaultSeverity": "error",
"extends": ["tslint:recommended"],
"jsRules": {},
"rules": {
"no-console": false
},
"rulesDirectory": []
}
By default, the TypeScript linter prevents the use of debugging via console
statements, so it is necessary to explicitly tell the linter to revoke the default no-console
rule.
Updating the package.json
File
At this point in the tutorial, you can either run functions in the terminal individually or create an npm script to run them.
In this step, you will create a start
script that will compile and transpile the TypeScript code, and then run the resulting .js
app.
Open the package.json
file and update it accordingly:
{
"name": "node-with-ts",
"version": "1.0.0",
"description": "",
"main": "dist/app.js",
"scripts": {
"start": "tsc && node dist/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"devDependencies": {
"@types/express": "^4.16.1",
"tslint": "^5.12.1",
"typescript": "^3.3.3"
},
"dependencies": {
"express": "^4.16.4"
}
}
In the snippet above, you updated the main
path and added the start
command to the scripts section. When looking at the start
command, you will see that first the tsc
command is run, and then the node
command. This will compile and then run the generated output with node
.
The tsc
command tells TypeScript to compile the application and place the generated .js
output into the specified outDir
directory, as set in the tsconfig.json
file.
Now that TypeScript and its linter are configured, it is time to build the Node Express server.
First, create a src
folder in the root of your project directory:
- mkdir src
Then create a file named app.ts
within it:
- touch src/app.ts
At this point, the folder structure should look like this:
├── node_modules/
├── src/
├── app.ts
├── package-lock.json
├── package.json
├── tsconfig.json
├── tslint.json
Open the app.ts
file with a text editor of your choice and paste in the following code snippet:
import express from 'express';
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
The code above creates a Node server that listens on port 3000
for requests. Run the app using the following command:
- npm start
If it runs successfully, a message will be logged to the terminal:
Output
server is listening on 3000
Now, you can visit http://localhost:3000
in your browser and you should see the message:
Output
The sedulous hyena ate the antelope!
Open the dist/app.js
file and you will find the transpiled version of the TypeScript code:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const app = express_1.default();
const port = 3000;
app.get('/', (req, res) => {
res.send('The sedulous hyena ate the antelope!');
});
app.listen(port, err => {
if (err) {
return console.error(err);
}
return console.log(`server is listening on ${port}`);
});
//# sourceMappingURL=app.js.map
At this point you have successfully configured your Node project to use TypeScript.
In this tutorial, you learned why TypeScript is useful for writing reliable JavaScript code and explored some of the benefits of working with TypeScript.
Finally, you set up a Node project using the Express framework, but compiled and ran the project using TypeScript.