Essential Node.js backend examples for developers in 2024

June 12, 2024

Node.js backend development continues to stand out in 2024 as a powerful and flexible runtime for building scalable and efficient applications, even more so with the rise of other runtimes such as Bun.

In this article, I wanted to provide a lightweight introduction to essential Node.js backend examples that demonstrate the effective use of advanced JavaScript and Node.js features. From harnessing the WHATWG Streams Standard and Web Streams API for efficient data handling to employing the built-in Node.js crypto module for security, working with Buffers for binary data manipulation, leveraging Symbols for encapsulation and namespacing, and utilizing template literals and tagged templates for generating dynamic HTML and SQL queries — each section provides practical, real-world code snippets and insights.

These examples not only showcase the versatility and strength of Node.js in solving backend challenges but also serve as a valuable reference for developers looking to elevate their backend solutions in 2024.

Chapters in this article:

  1. The WHATWG Streams Standard, Web Streams API, and async iterables

  2. Working with the Crypto module to validate webhook signatures

  3. Working with buffers and raw binary data in Node.js

  4. Using JavaScript Symbol for encapsulation

  5. Template literals and tagged templates to generate HTML and SQL queries

You can always catch up with the full source code for these Node.js backend examples in this GitHub repo.

1. The WHATWG Streams Standard, Web Streams API, and Async Iterables

In 2024, the prevalence of working with streams has significantly increased, especially when dealing with large language models and generative AI (GenAI). The OpenAI SDK for chat completion serves as a prime example of this trend. A typical example of streaming data from the OpenAI SDK API would look like this:

// `openai` is an instantiated OpenAI SDK client; the model and prompt are illustrative
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Tell me about Node.js streams" }],
  stream: true,
});

for await (const chunk of completion) {
  console.log(chunk);
}

Here, we use the concept of async iterables, which has simplified asynchronous workflows in Node.js to match the promise-based programming style. The `for await…` statement creates a loop iterating over async iterable objects as well as sync iterables, including built-in String, Array, Array-like objects (e.g., arguments or NodeList), TypedArray, Map, Set, and user-defined async/sync iterables.
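To make the async iterable protocol concrete, here is a minimal sketch of a user-defined async iterable (the `countdown` generator and its delay are illustrative, not from the original article) consumed with `for await...of`:

```javascript
// A user-defined async iterable: an async generator that yields values
// as they become available, like chunks arriving over the network
async function* countdown(from) {
  for (let i = from; i > 0; i--) {
    // Simulate an asynchronous source with a short delay
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield i;
  }
}

async function main() {
  const results = [];
  // for await...of drives the async iterator, awaiting each yielded value
  for await (const n of countdown(3)) {
    results.push(n);
  }
  console.log(results); // [ 3, 2, 1 ]
}

main();
```

Any object implementing `Symbol.asyncIterator` can be consumed the same way, which is exactly what the OpenAI SDK's streaming response does.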

Introducing WHATWG Standard Web Streams in Node.js

Web Streams, a part of the WHATWG Streams standard, have been integrated into Node.js, and they provide a robust way of handling streaming data. This standard allows developers to efficiently read, write, and transform streaming data using JavaScript.

Let's look at a complete example of how to use Web Streams in Node.js.

In the following example, we create a function `createReadableStreamFromFile()` that uses the `ReadableStream` class from the Web Streams API to create a stream of data from a file. A second function, `consumeStreamWithAsyncIterator()`, then consumes this stream using an Async Iterator.

import { ReadableStream } from "node:stream/web";
import fs from "fs";

function createReadableStreamFromFile(filePath) {
  const stream = new ReadableStream({
    start(controller) {
      const reader = fs.createReadStream(filePath);

      reader.on("data", (chunk) => {
        controller.enqueue(chunk);
        if (reader.readableFlowing === false) {
          reader.resume();
        }
      });

      reader.on("end", () => {
        controller.close();
      });

      reader.on("error", (err) => {
        controller.error(err);
      });
    },
  });

  return stream;
}

async function consumeStreamWithAsyncIterator(stream) {
  try {
    for await (const chunk of stream) {
      process.stdout.write(chunk);
    }
  } catch (err) {
    console.error("Error occurred while reading the stream:", err);
  }
}

const filePath = process.argv[2];
const stream = createReadableStreamFromFile(filePath);
await consumeStreamWithAsyncIterator(stream);

In this code, we're using the Web Streams API to read data from a file in a streaming manner. The function `createReadableStreamFromFile()` returns a `ReadableStream` object from a given file path. This stream is then consumed by `consumeStreamWithAsyncIterator()`, which reads the stream chunk by chunk and writes each chunk to the standard output. If an error occurs during the reading process, it's caught and logged to the console.
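The WHATWG standard also covers transforming streaming data. Here is a minimal sketch (the `upperCaseStream` helper is illustrative) of piping a `ReadableStream` through a `TransformStream` and consuming the result with the same async iterator pattern:

```javascript
import { ReadableStream, TransformStream } from "node:stream/web";

// Pipe a ReadableStream through a TransformStream, then consume the
// transformed output with for await...of
async function upperCaseStream(chunks) {
  const source = new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(chunk);
      controller.close();
    },
  });

  const toUpperCase = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase());
    },
  });

  let result = "";
  for await (const chunk of source.pipeThrough(toUpperCase)) {
    result += chunk;
  }
  return result;
}

upperCaseStream(["hello ", "streams"]).then(console.log); // HELLO STREAMS
```

`pipeThrough()` connects the readable side of one stream to the writable side of the transform, which is the standard composition primitive in the Web Streams API.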

By the way, did you notice the security vulnerability in the code above?

If you had the Snyk VS Code extension installed, then you’d get a wiggly linter error showing you that there’s a path traversal vulnerability in the code example.

The Snyk VS Code extension would show you how insecure data can flow into this Node.js backend code example, what this security vulnerability is about, and how to fix it with AI-curated suggestions from live open source projects.
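A common mitigation for this class of path traversal issue, sketched below under the assumption of an allow-listed base directory (`BASE_DIR` and `safeResolve` are illustrative names, not from the original example), is to resolve the user-supplied path and reject anything that escapes the base directory before it reaches any `fs` API:

```javascript
import path from "node:path";

// BASE_DIR is an assumed allow-listed directory for this sketch
const BASE_DIR = path.resolve("./data");

function safeResolve(userPath) {
  // Resolve the untrusted input against the base directory
  const resolved = path.resolve(BASE_DIR, userPath);
  // Reject any path that resolves outside the base directory
  if (!resolved.startsWith(BASE_DIR + path.sep)) {
    throw new Error("Path traversal attempt detected");
  }
  return resolved;
}

console.log(safeResolve("report.txt")); // an absolute path inside ./data
```

With a guard like this, input such as `../../etc/passwd` resolves outside `BASE_DIR` and is rejected before the file is ever opened.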

The Snyk VS Code extension finds a security vulnerability in my Node.js backend example code.

2. Working with the Crypto module to validate webhook signatures

Webhooks have become an integral part of modern backend development, providing a way for different services to communicate with each other in an efficient and real-time manner. However, with the increased use of webhooks comes the need for stronger security measures, one of which is the validation of webhook signatures.

In the realm of webhooks, a signature is a hash that is sent along with the webhook payload, which is calculated using a secret key known only to the sender and the recipient. The recipient can then calculate the hash on their end and compare it with the signature to verify the authenticity of the webhook.

The Node.js Crypto module provides a host of cryptographic functionality, including a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. HMAC (Hash-based Message Authentication Code) is particularly useful for validating webhook signatures.

Let's break down the provided code snippet to understand how it works:

const crypto = require("node:crypto");

const secret = process.env.WEBHOOK_SECRET;
const hmac = crypto.createHmac("sha256", secret);
const digest = Buffer.from(
  hmac.update(request.rawBody).digest("hex"),
  "utf8"
);
const signature = Buffer.from(request.headers["x-signature"] || "", "utf8");

if (!crypto.timingSafeEqual(digest, signature)) {
  throw new Error("Invalid signature.");
}

In this snippet, the `crypto.createHmac` method is used to create an HMAC object. This method takes two parameters — the algorithm to be used (in this case, `sha256`) and the secret key.

The HMAC object is then updated with the raw body of the webhook request using the `hmac.update` method. This method can be called multiple times with new data as it is streamed.

The `digest` method is then used to generate the hash. This method can only be called once on the HMAC object, and it returns the calculated hash. The `hex` parameter instructs the method to return the hash in hexadecimal format.

Now, we introduce the concept of secure string comparison. The calculated hash (digest) and the received signature from the webhook request header are then compared using the `crypto.timingSafeEqual` method. This method performs a timing-attack safe equality comparison between two buffers, making it ideal for comparing cryptographic outputs.

Timing attacks are a type of side-channel attack where an attacker tries to compromise a system by analyzing the time taken to execute cryptographic algorithms. By using a timing-safe method like `crypto.timingSafeEqual`, we protect against these types of attacks. If the digest and signature do not match, an error is thrown, indicating that the webhook request may not be authentic.

3. Working with buffers and raw binary data in Node.js

The Buffer API in Node.js is a powerful tool that allows developers to work directly with binary data. Whether you need to read a file, analyze an image, or process raw data, the Buffer API provides methods to handle such tasks efficiently.

Let's explore some of the common Buffer API methods in Node.js, such as `.from`, `.alloc`, and `.write`, which allow for the creation and manipulation of buffer objects. The `.from` method creates a new buffer using the data passed in as an argument, `.alloc` creates a new buffer of a specified size, and `.write` allows you to write data to a buffer. Another one is the `.concat` method, which is used to concatenate a list of Buffer instances.

let buffer1 = Buffer.from('Hello, ');
let buffer2 = Buffer.from('World!');
let buffer3 = Buffer.concat([buffer1, buffer2]);

console.log(buffer3.toString());
// prints: 'Hello, World!'
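The `.alloc` and `.write` methods mentioned above can be sketched just as briefly:

```javascript
// Create a zero-filled buffer and write UTF-8 data into it
const buf = Buffer.alloc(8); // 8 zero bytes
const bytesWritten = buf.write("Hi"); // returns the number of bytes written

console.log(bytesWritten); // 2
console.log(buf.toString("utf8", 0, bytesWritten)); // Hi
```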

Analyzing an image with OpenAI API using Node.js Buffer API

In the next Node.js backend example code, we are using the Buffer API to read an image file and analyze it using the OpenAI API. Firstly, we import the necessary modules and create an instance of the OpenAI API client:

import { readFile } from "node:fs/promises";
import OpenAI from "openai";

const openai = new OpenAI();

Next, we read the image file into a buffer:

// `readFile` from node:fs/promises returns a Buffer when no encoding is given,
// so the helper is a thin wrapper around it
async function readFileToBuffer(filePath) {
  return await readFile(filePath);
}

const imageBuffer = await readFileToBuffer(process.argv[2]);

We then validate the image type by checking the file signature against a known PNG image type signature:

function isImageTypeValid(imageBuffer) {
  const pngSignature = Buffer.from([
    0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a,
  ]);
  const fileSignature = imageBuffer.slice(0, 8);

  if (pngSignature.equals(fileSignature)) {
    return true;
  }

  return false;
}

Finally, we generate a descriptive alt text for the image using the OpenAI API:

async function generateAltTextForImage(imageBuffer) {
  const imageInBase64 = imageBuffer.toString("base64");

  const response = await openai.chat.completions.create({
    model: "gpt-4-vision-preview",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "What's in this image? generate a simple alt text for an image source in an HTML page",
          },
          {
            type: "image_url",
            image_url: {
              url: `data:image/png;base64,${imageInBase64}`,
            },
          },
        ],
      },
    ],
  });

  return response.choices[0];
}

Security considerations when using Buffer API

When working with the Buffer API, there are several security considerations to keep in mind. Improper handling of binary data can lead to potential security vulnerabilities such as buffer overflow or underflow errors. Always make sure to validate the input and properly handle the errors.

Also, be mindful of potential security vulnerabilities, such as the `Buffer` constructor, which is now deprecated and should be avoided in favor of safer alternatives like `Buffer.from` or `Buffer.alloc`.
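The safe alternatives are straightforward to use; a quick sketch:

```javascript
// Safer alternatives to the deprecated Buffer constructor:
const fromString = Buffer.from("hello");  // copies the string's bytes
const fromArray = Buffer.from([1, 2, 3]); // copies the array's bytes
const zeroed = Buffer.alloc(4);           // zero-filled, never leaks old memory

console.log(fromString.toString()); // hello
console.log(fromArray.length); // 3
console.log(zeroed.every((byte) => byte === 0)); // true
```

Unlike the old `new Buffer(size)` form, `Buffer.alloc` guarantees the memory is zero-initialized, so stale process memory can never leak into your data.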

4. Using JavaScript Symbol for encapsulation

Symbols are a primitive data type introduced in ES6 (ECMAScript 2015) that represent unique and immutable identifiers. They are created with the `Symbol()` function, which optionally accepts a description (a string) that can be used for debugging but does not affect the uniqueness of the symbol. Symbols are primarily used to create unique property keys for objects that do not collide with any other property, including those inherited. This makes them particularly useful for defining private or special properties of objects without risking property name collisions.
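The uniqueness guarantee is easy to demonstrate:

```javascript
// Two Symbols with the same description are still distinct values
const first = Symbol("id");
const second = Symbol("id");

console.log(first === second); // false
console.log(first.description); // id
```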

Here's a Node.js backend code example of how to use symbols to create a private property in a class as a way to encapsulate data:

const _privateProperty = Symbol('privateProperty');

class MyClass {
  constructor(value) {
    this[_privateProperty] = value;
  }

  getPrivateProperty() {
    return this[_privateProperty];
  }
}

const instance = new MyClass('secret');

console.log(instance.getPrivateProperty());
// Will output 'secret'
// The _privateProperty cannot be directly accessed from outside the class

In addition, Symbols are not accessible through object property enumeration (like `for...in` loops or `Object.keys()`), so in a sense, they can be used to simulate private properties and methods for objects. However, it's important to note that they are not truly private and can still be accessed using reflection methods like `Object.getOwnPropertySymbols()`.
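Both behaviors are quick to verify:

```javascript
const hidden = Symbol("hidden");
const obj = { visible: 1, [hidden]: 2 };

// Symbol keys are skipped by normal enumeration...
console.log(Object.keys(obj)); // [ 'visible' ]

// ...but remain reachable through reflection, so they are not truly private
const symbols = Object.getOwnPropertySymbols(obj);
console.log(obj[symbols[0]]); // 2
```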

JavaScript defines a set of well-known symbols that represent internal language behaviors that can be customized by developers. For example, implementing an iterator for a custom object using `Symbol.iterator`:

const iterable = {
  [Symbol.iterator]: function* () {
    yield 1;
    yield 2;
  },
};

for (const value of iterable) {
  // Logs 1 and 2
  console.log(value);
}

Fastify’s use of JavaScript's Symbol

Let's look at a more real-world example with the Fastify web application framework and how the project uses Symbols. Specifically, we're going to look at an example from Fastify's plugin architecture. One of the key aspects of Fastify's design is its encapsulation feature, which allows developers to create isolated application contexts using plugins. This is crucial for building large-scale applications where namespace collisions can become a problem.

Fastify uses Symbols to uniquely identify internal properties and methods, ensuring that these do not interfere with user-defined properties or those from other plugins. Here is a simplified example based on the actual use of Symbols in Fastify's source code for encapsulating the plugin's metadata:

const pluginMeta = Symbol('fastify.pluginMeta');

function registerPlugin(instance, plugin, options) {
  if (!plugin[pluginMeta]) {
    plugin[pluginMeta] = { options, name: plugin.name };
  }

  instance.register(plugin, options);
}

function myPlugin(instance, opts, done) {
  done();
}

registerPlugin(fastifyInstance, myPlugin, { prefix: '/api' });

In the above, `pluginMeta` is a Symbol used by Fastify to attach metadata to plugin functions. This metadata includes the plugin's options, name, and potentially other necessary information for the framework's internal use. The `registerPlugin` function simplifies the process of attaching metadata to a plugin before registering it with a Fastify instance. This metadata is then accessible within the Fastify framework but remains isolated from the plugin's public interface and the application's global scope.

Using JavaScript's Symbol for internal metadata like this has several advantages in a framework like Fastify:

  • Encapsulation: It prevents internal details from leaking into the user space, keeping the public API clean and intuitive.

  • Safety: It reduces the risk of accidental interference between plugins or between a plugin and the core framework, as Symbols are not accessible through normal object property enumeration.

  • Clarity: It clearly distinguishes between the framework's internal mechanisms and the APIs exposed to developers, making the Fastify codebase easier to maintain and extend.

5. Template literals and tagged templates to generate HTML and SQL queries

Introduced in ES6, template literals offer a more readable and concise syntax for creating strings in JavaScript. You might already be using them to embed dynamic expressions in strings, such as `hello ${name}`. No more string concatenation.

We're now seeing a growing trend of using template literals in the form of tagged templates to generate HTML and SQL queries in Node.js backends. Here are two real-world examples in Node.js backend and SSR code:

Tagged templates for generating SQL queries with Vercel's PostgreSQL library

Dealing with SQL queries in Node.js can often lead to verbose and error-prone code, especially when dynamically inserting values into queries. Vercel's PostgreSQL package (`@vercel/postgres`) introduces a safer and more concise way to format SQL queries using template literals:

import { sql } from '@vercel/postgres';

const jediName = 'Luke Skywalker';
const { rows } = await sql`SELECT * FROM jedis WHERE name = ${jediName};`;

The SQL-tagged template literal function safely interpolates the `jediName` variable into the SQL query, effectively preventing SQL injection attacks.
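To see how a tag like this can prevent injection, here is a simplified sketch of the underlying idea (this is an illustration of the technique, not the actual `@vercel/postgres` implementation): interpolated values become numbered placeholders and are passed separately as query parameters.

```javascript
// A toy sql tag: build parameterized query text plus a values array
function sql(strings, ...values) {
  const text = strings.reduce(
    (query, part, i) => query + part + (i < values.length ? `$${i + 1}` : ""),
    ""
  );
  return { text, values };
}

const jediName = "Luke Skywalker";
const query = sql`SELECT * FROM jedis WHERE name = ${jediName};`;

console.log(query.text);   // SELECT * FROM jedis WHERE name = $1;
console.log(query.values); // [ 'Luke Skywalker' ]
```

Because the value travels as a bound parameter rather than being spliced into the query string, a malicious input like `'; DROP TABLE jedis; --` is treated as data, not SQL.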

Tagged templates for generating HTML on Fastify servers

Similar to the SQL query example, `fastify-html` is a Fastify plugin that allows developers to use tagged templates to generate HTML content directly in route handlers. This can be particularly useful for server-side rendering (SSR) or generating dynamic HTML content (did someone say htmx?):

import fastify from 'fastify'
import fastifyHtml from 'fastify-html'

const app = fastify()
await app.register(fastifyHtml)

app.get('/', async (req, reply) => {
  // assuming the name comes from the query string
  const name = req.query.name || 'World'
  return reply.html`<h1>Hello ${name}</h1>`
})

await app.listen({ port: 3000 })

What are tagged template literals?

Building on the concept of template literals, tagged template literals are a feature of JavaScript that allows you to define a custom function (the "tag") to process template literals. This function receives the template literal as an array of strings and the interpolated expressions as separate arguments, allowing you to manipulate the template and its values before the final string is created.
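A minimal tag function makes the mechanics clear (the `highlight` tag is an illustrative example): the first argument holds the literal string parts, and the rest are the interpolated values.

```javascript
// strings: the literal parts; values: the interpolated expressions
function highlight(strings, ...values) {
  return strings.reduce(
    (out, part, i) => out + part + (i < values.length ? `<b>${values[i]}</b>` : ""),
    ""
  );
}

const user = "Ada";
console.log(highlight`Hello, ${user}!`); // Hello, <b>Ada</b>!
```

This interleaving of literal parts and values is exactly what lets tags like `sql` and `reply.html` escape or parameterize each value before assembling the final output.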

Closing up

If you liked this article, you might also want to check out best practices for creating a modern npm package with security in mind and 10 best practices to containerize Node.js web applications with Docker.

And if you're into security stories, you’ll want to make sure you're well-equipped to combat supply chain attacks on npm with developer security practices that I mentioned in the article.

Lastly, don't forget to check out Snyk with a free account to start securing your Node.js code, dependencies, and Docker container images.

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.


© 2024 Snyk Limited
Registered in England and Wales