When you’re using AI to code, it’s easy to get caught up in the excitement of creating something new and innovative. However, it’s crucial to remember that security should always be a top priority. Without a “security-first” approach, you might inadvertently create vulnerabilities that attackers can exploit. This is especially important when working with public APIs, which can be abused through excessive requests, scraping, or even DDoS attacks if no rate limiting is in place. By prioritizing security from the start, you can ensure that your project is not only innovative but also safe and reliable.
We have all heard of at least one scenario where an unprotected API was hammered by bots, scraped aggressively, or knocked over by a sudden flood of requests.
In this tutorial you will learn how to tackle these security gaps by implementing a rate limiter using Valkey (a Redis-compatible database) on DigitalOcean’s managed database service.
A rate limiter is a tool that controls the amount of requests a user or client can make to your API within a specific time window. It acts as a gatekeeper, monitoring incoming traffic and enforcing predefined limits to prevent abuse.
This is where Valkey comes in: its in-memory data structures and atomic operations make it an ideal fit for fast, reliable rate limiting.
We will build an API rate limiter using the Chuck Norris jokes API. When you visit our application, you can request a random Chuck Norris joke from the API, and you can do this up to 5 times within a 60-second window. Each time you request a joke, our system tracks your IP address and keeps count. If you try to get a sixth joke within that same minute, you’ll receive a “Rate limit exceeded. Please wait a minute” message instead (as shown in the image below). After the 60-second window expires, your counter resets, and you can start requesting jokes again.
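The behavior described above is a classic fixed-window counter: count requests per client, reset the count when the window expires. Before wiring in Valkey, here is a minimal in-memory sketch of the idea (the class and names are illustrative, not the app’s actual code):

```javascript
// Minimal fixed-window rate limiter: at most `limit` requests
// per `windowMs` milliseconds, tracked per client key (e.g. an IP).
class FixedWindowLimiter {
  constructor(limit = 5, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counters = new Map(); // key -> { count, resetAt }
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(key, now = Date.now()) {
    const entry = this.counters.get(key);
    if (!entry || now >= entry.resetAt) {
      // First request in a fresh window: start a new counter.
      this.counters.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count += 1;
      return true;
    }
    return false; // limit exceeded within the current window
  }
}
```

An in-memory Map works for a single process, but it is lost on restart and not shared across servers, which is exactly why we move the counters into Valkey next.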
Here’s how the application works (demo video: Rate.mp4). You can also exercise the endpoint from the terminal with `curl`. Before we get straight into writing code and building the rate limiter, let us understand how requests are processed in the backend with this diagram:
Our rate limiter application is built with three main components working together: a frontend where users request jokes, an Express.js backend that enforces the rate limit, and a Valkey database that stores the per-IP request counters.
Since Valkey will handle the logic part of the rate limiter, let’s get started with creating it first.
1. From the DigitalOcean control panel, click Create Database.
2. Select Valkey as the database engine.
3. Click the Create Database Cluster button to start creating the database.
4. Once the cluster is provisioned, you can access its Overview, Insights, Logs, and Settings from the tabs.

Now that we have our Valkey database set up on DigitalOcean, let’s build our Express.js backend that will connect to it and implement our rate limiter.
Create a `.env` file with your DigitalOcean Valkey credentials, which you can get from the control panel. Then create an `index.js` file and set up the Valkey connection. The `tls: {}` configuration is crucial here, as DigitalOcean’s Valkey requires a secure connection.
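A minimal sketch of that connection setup, assuming the ioredis client (Valkey is Redis-protocol compatible). The environment variable names below are illustrative; use the host, port, username, and password shown in your control panel:

```javascript
// Build ioredis connection options from environment-style config.
// tls: {} is required: DigitalOcean's Valkey only accepts TLS connections.
function valkeyOptions(env) {
  return {
    host: env.VALKEY_HOST,
    port: Number(env.VALKEY_PORT || 25061),
    username: env.VALKEY_USER || "default",
    password: env.VALKEY_PASSWORD,
    tls: {},
  };
}

// Usage (assumes the ioredis and dotenv packages are installed):
//   require("dotenv").config();
//   const Redis = require("ioredis");
//   const valkey = new Redis(valkeyOptions(process.env));
```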
This middleware runs before each request to the protected endpoint: it increments a per-IP counter in Valkey, starts a 60-second expiry on the first request in a window, and rejects any request beyond the five-request limit with a 429 response.
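A sketch of such middleware, assuming an ioredis-style client with `incr` and `expire` (the key prefix and limits are illustrative):

```javascript
const RATE_LIMIT = 5;      // max requests...
const WINDOW_SECONDS = 60; // ...per 60-second window

// Express-style middleware factory; `client` is any ioredis-compatible client.
function rateLimiter(client, limit = RATE_LIMIT, windowSec = WINDOW_SECONDS) {
  return async function (req, res, next) {
    const key = `rate_limit:${req.ip}`;
    const count = await client.incr(key); // atomic per-IP increment
    if (count === 1) {
      // First request in this window: start the expiry clock.
      await client.expire(key, windowSec);
    }
    if (count > limit) {
      return res
        .status(429)
        .json({ error: "Rate limit exceeded. Please wait a minute" });
    }
    next();
  };
}
```

Note that `incr` and `expire` here are two separate round trips; the Lua-script approach discussed later performs both in one atomic server-side step.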
Next, we will create our rate-limited API endpoint (in this case it is the Chuck Norris joke API):
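A sketch of that endpoint’s handler. It calls the public Chuck Norris API (`https://api.chucknorris.io/jokes/random`, which returns the joke text in a `value` field); the injectable `fetchFn` parameter is an assumption for testability, defaulting to Node’s global `fetch`:

```javascript
// Handler for the rate-limited joke endpoint.
function jokeHandler(fetchFn = fetch) {
  return async function (req, res) {
    try {
      const upstream = await fetchFn("https://api.chucknorris.io/jokes/random");
      if (!upstream.ok) throw new Error(`upstream returned ${upstream.status}`);
      const data = await upstream.json();
      res.status(200).json({ joke: data.value });
    } catch {
      res.status(502).json({ error: "Could not fetch a joke" });
    }
  };
}

// Wiring it up in Express, behind the rate-limit middleware:
//   app.get("/api/joke", /* rate-limit middleware */, jokeHandler());
```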
And that’s it! The backend is now ready to handle requests with rate limiting. In the next section, we’ll build the front end to interact with this API.
Now that we have our backend running, the next step is to create a frontend to see it working visually. In this section we’ll focus on the core integration with our backend API; you can find the entire code on GitHub.
Here’s our main page component that handles the communication with our rate-limited backend:
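The core of that component boils down to one fetch call and three pieces of state: the current joke, any error, and the loading flag. A framework-agnostic sketch of that logic (the endpoint path and helper names are illustrative):

```javascript
// Translate a backend response into the UI state described above.
async function responseToState(res) {
  if (res.status === 429) {
    return { joke: null, error: "Rate limit exceeded. Please wait a minute" };
  }
  if (!res.ok) {
    return { joke: null, error: "Something went wrong. Please try again." };
  }
  const data = await res.json();
  return { joke: data.joke, error: null };
}

// Click handler behind the "Get a Joke" button.
async function getJoke(setState, fetchFn = fetch) {
  setState({ loading: true });
  const res = await fetchFn("/api/joke");
  const state = await responseToState(res);
  setState({ ...state, loading: false });
}
```

In the React component, `setState` maps onto the component’s state setters and the returned values drive what the page renders.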
When a user clicks the “Get a Joke” button, our front end sends a request to the backend API, which checks the rate limit in Valkey. If the user is within their 5-requests-per-minute limit, they get a Chuck Norris joke; if they’ve exceeded it, they receive a friendly “too many requests” message. The front end handles all responses: it shows a loading state during requests, displays appropriate error messages if something goes wrong, and tracks the current joke, any errors, and the loading status in the application state.
DigitalOcean has added Valkey support to replace its existing Managed Caching service. Let’s see how it has helped us in building the rate limiter.
Valkey is a fork of Redis with significant improvements, including enhanced multi-threading, a more memory-efficient dictionary structure, and better observability.
In our rate limiter, we benefit from these features through the connection setup shown earlier.
Valkey’s multi-threading capabilities shine in our rate limiting implementation. When multiple users hit our API simultaneously, Valkey can handle the concurrent requests efficiently:
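A sketch of such an atomic check using a Lua script (the app’s exact script may differ; this version increments the counter and sets the expiry in a single server-side step, invoked here via an ioredis-style `eval`):

```javascript
// INCR + EXPIRE as one atomic server-side operation: no other client
// can interleave between the increment and the expiry being set.
const RATE_LIMIT_SCRIPT = `
local count = redis.call("INCR", KEYS[1])
if count == 1 then
  redis.call("EXPIRE", KEYS[1], ARGV[1])
end
return count
`;

// Returns the request count for this client's current window.
async function atomicCount(client, key, windowSec) {
  return client.eval(RATE_LIMIT_SCRIPT, 1, key, windowSec);
}
```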
This Lua script runs atomically, and Valkey’s improved threading ensures it can process many of these operations concurrently without performance degradation.
Valkey’s new dictionary structure helps our rate limiter be more memory-efficient. Each rate limit counter is stored with minimal overhead:
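Conceptually, each tracked client is just one small string key with a TTL (the values below are illustrative):

```
rate_limit:203.0.113.7   -> "3"   TTL 41s
rate_limit:198.51.100.2  -> "5"   TTL 12s
```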
The combination of efficient memory usage and automatic expiration (TTL) means our rate limiter can handle millions of unique IP addresses without consuming excessive memory.
Valkey’s improved observability features help us monitor our rate limiter’s performance. We can track the number of rate-limited requests, memory usage per rate-limit counter, and the latency of rate-limit checks.
This is visible in our logging:
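A sketch of what that logging might look like (the format and helper name are illustrative, not the app’s exact output):

```javascript
// Format a per-request rate-limit log line.
function formatRateLimitLog(ip, count, limit, latencyMs) {
  const status = count > limit ? "BLOCKED" : "OK";
  return `[rate-limit] ip=${ip} count=${count}/${limit} latency=${latencyMs}ms ${status}`;
}

console.log(formatRateLimitLog("203.0.113.7", 3, 5, 1.2));
console.log(formatRateLimitLog("203.0.113.7", 6, 5, 0.9));
```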
For users currently using Managed Caching, Valkey provides a smooth migration path. Our rate limiter implementation is compatible with both services, making it easy to transition:
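Because both services speak the same protocol, migrating is typically just a connection-string change; the rate-limiter code itself is untouched. A sketch (the environment variable names are illustrative):

```javascript
// The same client code works against Managed Caching (Redis) and Valkey;
// only the connection URL differs, e.g. rediss://default:<password>@<host>:25061
function clientUrlFrom(env) {
  // Prefer the Valkey URL if present, otherwise fall back to the old one.
  return env.VALKEY_URL || env.REDIS_URL;
}

// Usage with an ioredis-style client (assumption):
//   const valkey = new Redis(clientUrlFrom(process.env));
```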
All of these together make Valkey the ideal choice for implementing scalable, efficient rate limiting in production environments.
Valkey is an open-source, Redis-compatible in-memory data store, available on DigitalOcean as a managed database service. Its atomic operations and memory-efficient dictionary structure make it well suited to storing rate-limit counters, allowing millions of unique IP addresses to be tracked without consuming excessive memory.

Valkey helps implement a scalable rate limiter in two ways. Its atomic operations keep rate-limit checks accurate and efficient even under high concurrency, and its efficient memory usage combined with automatic key expiration (TTL) keeps the memory footprint small no matter how many clients are tracked.

The benefits of using Valkey for rate limiting are significant. Its managed service on DigitalOcean scales to handle a large number of requests without performance degradation, making it well suited to high-traffic environments. Its memory efficiency means rate-limit counters are stored with minimal overhead. Its observability features let you track the number of rate-limited requests, memory usage per rate-limit counter, and the latency of rate-limit checks, so you can monitor and optimize the rate limiter over time. For more information, refer to the Valkey documentation.

The rate limiter implementation shown here works with both DigitalOcean’s Managed Caching and Valkey services, since both speak the same protocol. Whether you’re currently using Managed Caching or planning to migrate to Valkey, the rate-limiter code stays the same, making the transition seamless.
Valkey ensures memory efficiency in rate limiting by using a new dictionary structure to store rate limit counters. This structure has minimal overhead, allowing for millions of unique IP addresses to be handled without consuming excessive memory.
Valkey’s observability features for rate limiting are designed to provide a comprehensive understanding of the rate limiter’s performance. These features include the number of rate-limited requests, memory usage per rate-limit counter, and the latency of rate-limit checks.
These observability features provide valuable insights into the performance of the rate limiter, enabling you to make data-driven decisions to optimize your application’s traffic management. For more information on Valkey’s observability features, refer to the Valkey documentation.
In this tutorial, we’ve built an API rate limiter using Valkey on DigitalOcean, showing how to protect your APIs from abuse while maintaining a good user experience. You have learned how to implement rate limiting using Valkey’s atomic operations and how to tie it all together with a simple frontend.
Continue building with DigitalOcean Gen AI Platform.