By Hugo Vaz, Anish Singh Walia and Vinayak Baranwal

MQTT brokers connect IoT devices and applications through a publish-subscribe messaging pattern, making them essential for modern IoT infrastructure. Coreflux is a low-code MQTT broker that adds real-time data processing and transformation capabilities, allowing you to integrate directly with DigitalOcean managed databases including MongoDB, PostgreSQL, MySQL, and OpenSearch without writing custom integration code.
What you’ll learn: This tutorial walks you through deploying a complete IoT data pipeline: from setting up a managed database cluster and Coreflux MQTT broker on DigitalOcean, to configuring secure VPC networking, building data transformation models using Coreflux’s Language of Things (LoT), and automatically storing processed IoT data in your chosen database. You’ll end up with a production-ready setup that handles real-time messaging and persistent storage for IoT applications.
This tutorial provides a production-ready foundation for IoT applications that need real-time messaging combined with persistent data storage and advanced capabilities like search, analytics, or relational queries.
By the end of this guide, you will have deployed a Coreflux MQTT broker on a DigitalOcean Droplet, a managed database cluster of your choice, and a low-code data pipeline that connects the two.
Coreflux provides a lightweight MQTT broker and data pipeline tools through the Language of Things (LoT) programming language for efficient IoT communication on DigitalOcean.
MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe network protocol widely adopted in IoT ecosystems. Designed for constrained devices and low-bandwidth, high-latency, or unreliable networks, MQTT enables efficient, real-time messaging in bandwidth-constrained environments.
Coreflux offers a lightweight MQTT broker to facilitate efficient, real-time communication between IoT devices and applications, including the real-time data transformation capabilities needed for each use case. Built for scalability and reliability, Coreflux is tailored for environments where low latency and high throughput are critical.
Coreflux handles message routing and data flow between devices, whether you’re building a small IoT project or deploying an industrial monitoring system.
With Coreflux on DigitalOcean, you get:
- **Data Processing**: Centralize processing where your data lives, enabling real-time transformation of incoming device messages.
- **Data Integration**: Easily integrate with other DigitalOcean services like Managed Databases (PostgreSQL, MongoDB, MySQL, or OpenSearch), keeping all your data needs in a single, simple ecosystem.
- **Scalability**: Handle growing amounts of data and devices without compromising performance.
- **Reliability**: Ensure consistent and dependable messaging across all connected devices.

Before you begin this MQTT broker deployment tutorial, you’ll need:
Estimated time: This tutorial takes approximately 30-45 minutes to complete, depending on database provisioning time (typically 1-5 minutes per database cluster).
First, you’ll create a Virtual Private Cloud (VPC) to ensure secure communication between your IoT services and MQTT broker, without the need for public access.

Configure your VPC for IoT automation:
Click Create VPC Network
The VPC will provide isolated networking for all your IoT resources, ensuring secure communication between the Coreflux MQTT broker and managed databases. For more details on VPC configuration, see our guide on creating VPC networks.
Choose one of the following database options based on your IoT application requirements:
Managed PostgreSQL on DigitalOcean is a good fit when your IoT workloads need relational schemas, strong consistency, and advanced SQL analytics, backed by automated backups, monitoring, and maintenance.

From the DigitalOcean control panel, navigate to Databases
Click Create Database Cluster
Configure your PostgreSQL cluster for IoT automation:
Choose your plan based on your IoT requirements:
Click Create Database Cluster
The managed database creation process typically takes 1-5 minutes. Once complete, you’ll be redirected to the database overview page, where you can see the connection details and perform administrative actions.
You’ll be prompted with Getting Started steps, where your connection details are shown and you can configure the inbound access rules (recommended to limit to your IP and VPC-only).

For connection details, you’ll see two options: Public Network and VPC Network. The first is for external access from tools like DBeaver, while the second will be used by the Coreflux service to access the database.

You can test the PostgreSQL connection using DBeaver with the provided public access connection parameters.

For better security and organization, create a dedicated user and database for your IoT automation application. This can also be done through DBeaver or CLI, but DigitalOcean provides a user-friendly approach:
Note: You may need to adjust the user’s permissions within the database before it can create tables and insert or select data. In PostgreSQL, `INSERT` and `SELECT` are table-level privileges and cannot be granted on a database, so grant them separately, for example: `GRANT CREATE ON DATABASE "coreflux-broker-data" TO "coreflux-broker-client";` followed by `GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA public TO "coreflux-broker-client";`. For MySQL, use ``GRANT CREATE, INSERT, SELECT ON `coreflux-broker-data`.* TO 'coreflux-broker-client'@'%';``. See our PostgreSQL quickstart and MySQL documentation for more details.
Managed MySQL on DigitalOcean is ideal for structured, transactional IoT data where you want familiar SQL, broad ecosystem support, and a fully managed service handling backups, updates, and monitoring.

From the DigitalOcean control panel, navigate to Databases
Click Create Database Cluster
Configure your MySQL cluster for IoT automation:
Choose your plan based on your IoT requirements:
Click Create Database Cluster
The managed database creation process typically takes 1-5 minutes. Once complete, you’ll be redirected to the database overview page, where you can see the connection details and perform administrative actions.
You’ll be prompted with Getting Started steps, where your connection details are shown and you can configure the inbound access rules (recommended to limit to your IP and VPC-only).

For connection details, you’ll see two options: Public Network and VPC Network. The first is for external access from tools like DBeaver, while the second will be used by the Coreflux service to access the database.

You can test the MySQL connection using DBeaver with the provided connection parameters, using public access credentials.
Note: You may need to change DBeaver’s driver settings — set allowPublicKeyRetrieval = true.

For better security and organization, create a dedicated user and database for your IoT automation application. This can also be done through DBeaver or CLI, but DigitalOcean provides a user-friendly approach:
Managed MongoDB on DigitalOcean is well-suited to flexible or evolving IoT payloads, letting you store heterogeneous sensor documents without rigid schemas, while the platform handles replication, backups, and monitoring.

From the DigitalOcean control panel, navigate to Databases
Click Create Database Cluster
Configure your MongoDB cluster for IoT automation:
Choose your plan based on your IoT requirements:
Click Create Database Cluster
The managed database creation process typically takes 1-5 minutes. Once complete, you’ll be redirected to the database overview page, where you can see the connection details and perform administrative actions.
You’ll be prompted with Getting Started steps, where your connection details are shown and you can configure the inbound access rules (recommended to limit to your IP and VPC-only).

For connection details, you’ll be able to see two options: Public Network and VPC Network. The first is for external access for tools like MongoDB Compass, while the second will be used by the Coreflux service to access the database.

You can test the MongoDB connection using MongoDB Compass or the provided connection string, using public access credentials:
```
mongodb://username:password@mongodb-host:27017/defaultauthdb?ssl=true
```
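If your password contains special characters such as `@` or `:`, percent-encode the credentials before placing them in the URI, or the connection string will not parse. A small Python sketch (the helper name is ours, not part of Coreflux or MongoDB):

```python
from urllib.parse import quote_plus

def build_mongo_uri(username, password, host, port=27017, authdb="admin"):
    """Assemble a MongoDB connection URI, percent-encoding the credentials
    so characters like '@' or ':' in passwords do not break URI parsing."""
    return (
        f"mongodb://{quote_plus(username)}:{quote_plus(password)}"
        f"@{host}:{port}/{authdb}?ssl=true"
    )

# A password containing '@' and ':' is safely encoded as %40 and %3A.
uri = build_mongo_uri("doadmin", "p@ss:word", "mongodb-host")
```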

For better security and organization, create a dedicated user and database for your IoT automation application. This can also be done through MongoDB Compass or CLI, but DigitalOcean provides a user-friendly approach:
Managed OpenSearch on DigitalOcean is designed for search, log analytics, and time-series dashboards over high-volume IoT data, with the service managing cluster health, scaling, and index storage for you.

From the DigitalOcean control panel, navigate to Databases
Click Create Database Cluster
Configure your OpenSearch cluster for IoT automation:
Choose your plan based on your IoT requirements:
Click Create Database Cluster
The managed database creation process typically takes 1-5 minutes. Once complete, you’ll be redirected to the database overview page, where you can see the connection details and perform administrative actions.
You’ll be prompted with Getting Started steps, where your connection details are shown and you can configure the inbound access rules (recommended to limit to your IP and VPC-only).

For connection details, you’ll be able to see two options: Public Network and VPC Network. The first is for external access for tools, while the second will be used by the Coreflux service to access the database. You’ll also see the URL and parameters to access the OpenSearch Dashboards.

You can test the OpenSearch connection using OpenSearch Dashboards with the provided credentials.


Configure your droplet for MQTT broker deployment:

Choose Size for your IoT workload:
Choose Authentication Method:
Finalize Details:
Click Create Droplet
Open the Droplet home page and wait for the deployment to finish.

Using the same approach as for the Coreflux Droplet, select Docker as the Marketplace image.
Once your droplet is running, connect to it via SSH with the defined authentication method or the web console available in the Droplet home page:
```bash
ssh root@your-droplet-ip
```

Run the Coreflux MQTT broker using Docker:
```bash
docker run -d \
  --name coreflux \
  -p 1883:1883 \
  -p 1884:1884 \
  -p 5000:5000 \
  -p 443:443 \
  coreflux/coreflux-mqtt-broker-t:1.6.3
```
This Docker command runs the broker as a detached container named coreflux and publishes ports 1883 (plain MQTT), 1884 (MQTT over TLS), and 5000 and 443 for the broker’s additional interfaces.
Verify the MQTT broker is running:
```bash
docker ps
```
You should see a container running:

You can access the MQTT broker through an MQTT client like MQTT Explorer to validate access to the broker, regardless of the approach taken to deploy it.

For production IoT automation deployments, configure firewall rules to restrict access:
Navigate to Networking → Firewalls
Click Create Firewall
Configure inbound rules for MQTT broker security:
Apply the firewall to your droplet
For detailed firewall configuration, refer to DigitalOcean’s firewall quickstart guide. Production tip: Restrict MQTT port 1883 to specific source IPs or VPC ranges only, and prefer port 1884 (MQTT with TLS) for external device connections. Consider using DigitalOcean App Platform with private networking if you need additional security layers.
The LoT (Language of Things) Notebook extension for Visual Studio Code provides an integrated low code development environment for MQTT broker programming and IoT automation. Learn more about Coreflux’s Language of Things (LoT) for low-code IoT automation.

Configure the connection to your Coreflux MQTT broker, using default credentials, when prompted on the top bar or by clicking the MQTT button on the bottom bar on the left:
Assuming no errors, you’ll see the status of the MQTT connectivity to the broker in the bottom bar, on the left.

For this use case, we will build an integration that moves raw data through a transformation pipeline and into a database. However, as we are not connected to any MQTT devices in this demo, we will take advantage of LoT’s capabilities and use an Action to simulate device data.
In LoT, an Action is an executable logic that is triggered by specific events such as timed intervals, topic updates, or explicit calls from other actions or system components. Actions allow dynamic interaction with MQTT topics, internal variables, and payloads, facilitating complex IoT automation workflows.
As such, we can use an Action that generates data in certain topics in a defined time interval, that can then be used by the rest of the pipeline we will define below.
You can download the sample project from the GitHub repo.
Create an Action to generate simulated sensor data using the low code LoT (Language of Things) interface:
```
DEFINE ACTION RANDOMIZEMachineData
ON EVERY 10 SECONDS DO
    PUBLISH TOPIC "raw_data/machine1" WITH RANDOM BETWEEN 0 AND 10
    PUBLISH TOPIC "raw_data/station2" WITH RANDOM BETWEEN 0 AND 60
```
The provided Notebook also includes an Action that simulates data with an incremental counter, as an alternative to the randomized Action above.

When you run this Action, it will publish a random value between 0 and 10 to raw_data/machine1 and a random value between 0 and 60 to raw_data/station2 every 10 seconds.
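As a plain-Python analogue of what the Action produces (assuming RANDOM BETWEEN is inclusive on both ends), one cycle of simulated readings might look like:

```python
import random

def simulate_machine_data():
    """Mimic one tick of the RANDOMIZEMachineData action: one reading per
    topic. Topic names match the LoT action above; ranges assumed inclusive."""
    return {
        "raw_data/machine1": random.randint(0, 10),
        "raw_data/station2": random.randint(0, 60),
    }

readings = simulate_machine_data()
```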
Models in Coreflux are used to transform, aggregate, and compute values from input MQTT topics, publishing the results to new topics. They serve as the foundation for the creation of the UNS (Unified Namespace) of your system, applicable to your various data sources.
This way, a Model allows you to define how raw IoT data should be structured and transformed, for a single device or for multiple devices simultaneously (through the use of the + wildcard). A model also serves as the key data schema used for scalable storage to the managed database.
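The + wildcard follows standard MQTT topic-filter matching: + matches exactly one topic level, while # (only valid at the end of a filter) matches all remaining levels. As a reference, here is a small Python sketch of that matching rule (the function name is ours, not part of Coreflux):

```python
def topic_matches(filter_topic: str, topic: str) -> bool:
    """Check whether an MQTT topic matches a subscription filter.
    '+' matches exactly one level; '#' (at the end) matches the rest."""
    f_parts = filter_topic.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":            # multi-level wildcard: match everything below
            return True
        if i >= len(t_parts):   # topic ran out of levels before the filter did
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```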
```
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
    ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
    ADD "energy_wh" WITH (energy * 1000)
    ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
    ADD "production_count" WITH (IF production_status EQUALS "active" THEN (production_count + 1) ELSE 0)
    ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
    ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
    ADD "timestamp" WITH TIMESTAMP "UTC"
```
This low code model listens on raw_data/+ as its trigger, converts the energy reading to watt-hours, derives a production status, maintains an incremental production count, flags stoppages and maintenance alerts, and stamps each record with a UTC timestamp.
As we generated two simulated sensors/machines with the Action, we can see the Model structure being applied automatically to both, generating both a JSON object and the individual topics.
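To make the model’s logic concrete, here is a rough Python equivalent of the derived fields for a single reading (function and parameter names are illustrative; the timestamp field is omitted):

```python
def transform(energy: float, prev_count: int = 0) -> dict:
    """Mirror the MachineData model's derived fields for one reading.
    prev_count stands in for the model's previous production_count value."""
    status = "active" if energy > 5 else "inactive"
    return {
        "energy": energy,
        "energy_wh": energy * 1000,                        # kWh -> Wh
        "production_status": status,
        "production_count": prev_count + 1 if status == "active" else 0,
        "stoppage": 1 if status == "inactive" else 0,
        "maintenance_alert": energy > 50,                  # threshold alarm
    }
```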

Choose the database integration section that matches your selected database from Step 2.
In this section, you’ll learn how to store your processed IoT data in a PostgreSQL managed database on DigitalOcean.
To store your processed IoT data in a PostgreSQL managed database, you’ll define a Route in Coreflux. Routes specify how data is sent from your MQTT broker to your PostgreSQL cluster using a simple, low-code configuration:
```
DEFINE ROUTE PostgreSQL_Log WITH TYPE POSTGRESQL
    ADD SQL_CONFIG
        WITH SERVER "db-postgresql.db.onmyserver.com"
        WITH PORT 25060
        WITH DATABASE "defaultdb"
        WITH USERNAME "doadmin"
        WITH PASSWORD "AVNS_pass"
        WITH USE_SSL TRUE
        WITH TRUST_SERVER_CERTIFICATE FALSE
```
Replace with your PostgreSQL connection details from DigitalOcean and run the Route in your LoT Notebook. Important: Use the VPC connection details (not public) for better security and lower latency. The VPC hostname and port differ from the public connection string—check your database cluster’s connection details page for both options.
Modify your LoT model to use the database route for scalable storage, by adding this to the end of the Model:
```
STORE IN "PostgreSQL_Log"
    WITH TABLE "MachineProductionData"
```
Additionally, add a parameter derived from the topic so that each entry in your managed database has a unique device identifier.
```
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
    ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
    ADD "device_name" WITH REPLACE "+" WITH TOPIC POSITION 2 IN "+"
    ADD "energy_wh" WITH (energy * 1000)
    ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
    ADD "production_count" WITH (IF production_status EQUALS "active" THEN (production_count + 1) ELSE 0)
    ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
    ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
    ADD "timestamp" WITH TIMESTAMP "UTC"
    STORE IN "PostgreSQL_Log"
        WITH TABLE "MachineProductionData"
```
After you deploy this updated model, new data is automatically stored in the database each time the trigger topic updates.
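The device_name field corresponds to taking the second level of the trigger topic. A Python equivalent, assuming TOPIC POSITION is 1-indexed (the function name is illustrative):

```python
def device_name(trigger_topic: str) -> str:
    """Extract the device identifier from the trigger topic,
    e.g. 'raw_data/station2' -> 'station2' (topic position 2, 1-indexed)."""
    return trigger_topic.split("/")[1]
```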
MySQL is a widely used relational database management system, making it an excellent choice for storing and analyzing IoT data at scale. In this section, you’ll learn how to connect your Coreflux MQTT broker to a managed MySQL database on DigitalOcean, so your real-time device data is securely and reliably persisted for analytics, reporting, or integration with other applications.
To enable this integration, you must define a Route in Coreflux’s LoT (Language of Things) that instructs where and how the processed data should be sent. Below is the required low-code format for routing data to a MySQL database. Be sure to substitute your own connection details as needed:
```
DEFINE ROUTE MySQL_Log WITH TYPE MYSQL
    ADD SQL_CONFIG
        WITH SERVER "db-mysql.db.onmyserver.com"
        WITH PORT 25060
        WITH DATABASE "defaultdb"
        WITH USERNAME "doadmin"
        WITH PASSWORD "AVNS_pass"
        WITH USE_SSL TRUE
        WITH TRUST_SERVER_CERTIFICATE FALSE
```
Replace with your MySQL connection details from DigitalOcean and run the Route in your LoT Notebook. Important: Use the VPC connection details (not public) for better security and lower latency. If you encounter connection issues, verify that TRUST_SERVER_CERTIFICATE is set correctly for your MySQL version—some versions require TRUE while others work with FALSE.
Modify your LoT model to use the database route for scalable storage, by adding this to the end of the Model:
```
STORE IN "MySQL_Log"
    WITH TABLE "MachineProductionData"
```
Additionally, add a parameter derived from the topic so that each entry in your managed database has a unique device identifier.
```
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
    ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
    ADD "device_name" WITH REPLACE "+" WITH TOPIC POSITION 2 IN "+"
    ADD "energy_wh" WITH (energy * 1000)
    ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
    ADD "production_count" WITH (IF production_status EQUALS "active" THEN (production_count + 1) ELSE 0)
    ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
    ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
    ADD "timestamp" WITH TIMESTAMP "UTC"
    STORE IN "MySQL_Log"
        WITH TABLE "MachineProductionData"
```
After you deploy this updated model, new data is automatically stored in the database each time the trigger topic updates.
MongoDB is a NoSQL database that is well-suited for storing and querying IoT data with flexible schemas. In this section, you’ll learn how to connect your Coreflux MQTT broker to a managed MongoDB database on DigitalOcean, so your real-time device data is securely and reliably persisted for analytics, reporting, or integration with other applications.
To enable this integration, you must define a Route in Coreflux’s LoT (Language of Things) that instructs where and how the processed data should be sent. Below is the required low-code format for routing data to a MongoDB database. Be sure to substitute your own connection details as needed:
```
DEFINE ROUTE mongo_route WITH TYPE MONGODB
    ADD MONGODB_CONFIG
        WITH CONNECTION_STRING "mongodb+srv://<username>:<password>@<cluster-uri>/<database>?tls=true&authSource=admin&replicaSet=<replica-set>"
        WITH DATABASE "admin"
```
Replace with your MongoDB connection details from DigitalOcean and run the Route in your LoT Notebook. Important: Use the VPC connection string format when available. The connection string should include tls=true and authSource=admin parameters. For troubleshooting MongoDB connections, see our guide on connecting to MongoDB.
Modify your LoT model to use the database route for scalable storage, by adding this to the end of the Model:
```
STORE IN "mongo_route"
    WITH TABLE "MachineProductionData"
```
Additionally, add a parameter derived from the topic so that each entry in your managed database has a unique device identifier.
```
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
    ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
    ADD "device_name" WITH REPLACE "+" WITH TOPIC POSITION 2 IN "+"
    ADD "energy_wh" WITH (energy * 1000)
    ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
    ADD "production_count" WITH (IF production_status EQUALS "active" THEN (production_count + 1) ELSE 0)
    ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
    ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
    ADD "timestamp" WITH TIMESTAMP "UTC"
    STORE IN "mongo_route"
        WITH TABLE "MachineProductionData"
```
After you deploy this updated model, new data is automatically stored in the database each time the trigger topic updates.
OpenSearch is a distributed search and analytics engine designed for large-scale data processing and real-time analytics. In this section, you’ll learn how to connect your Coreflux MQTT broker to a managed OpenSearch database on DigitalOcean, so your real-time device data is securely and reliably persisted for analytics, reporting, or integration with other applications.
To enable this integration, you must define a Route in Coreflux’s LoT (Language of Things) that instructs where and how the processed data should be sent. Below is the required low-code format for routing data to an OpenSearch database. Be sure to substitute your own connection details as needed:
```
DEFINE ROUTE OpenSearch_Log WITH TYPE OPENSEARCH
    ADD OPENSEARCH_CONFIG
        WITH BASE_URL "https://my-opensearch-cluster:9200"
        WITH USERNAME "myuser"
        WITH PASSWORD "mypassword"
        WITH USE_SSL TRUE
        WITH IGNORE_CERT_ERRORS FALSE
```
Replace with your OpenSearch connection details from DigitalOcean and run the Route in your LoT Notebook. Important: Use the VPC base URL (not public) when available. The base URL format is typically https://your-cluster-hostname:9200. For OpenSearch Dashboards access, use the separate Dashboards URL provided in your database cluster details. See our OpenSearch quickstart for more details.
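For reference, documents can also be indexed directly through OpenSearch’s REST API (POST to /<index>/_doc with HTTP basic auth). This Python sketch only prepares the request pieces (URL, headers, and body) without sending anything; the hostname, index, and credentials are illustrative:

```python
import base64
import json

def build_index_request(base_url, index, doc, username, password):
    """Prepare the parts of an OpenSearch index request (POST /<index>/_doc).
    Returns (url, headers, body) for use with any HTTP client."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",   # HTTP basic auth header
        "Content-Type": "application/json",
    }
    return f"{base_url}/{index}/_doc", headers, json.dumps(doc)

url, headers, body = build_index_request(
    "https://my-opensearch-cluster:9200", "machineproductiondata",
    {"energy": 36, "device_name": "station2"}, "myuser", "mypassword",
)
```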
Modify your LoT model to use the database route for scalable storage, by adding this to the end of the Model:
```
STORE IN "OpenSearch_Log"
    WITH TABLE "MachineProductionData"
```
Additionally, add a parameter derived from the topic so that each entry in your managed database has a unique device identifier.
```
DEFINE MODEL MachineData WITH TOPIC "Simulator/Machine/+/Data"
    ADD "energy" WITH TOPIC "raw_data/+" AS TRIGGER
    ADD "device_name" WITH REPLACE "+" WITH TOPIC POSITION 2 IN "+"
    ADD "energy_wh" WITH (energy * 1000)
    ADD "production_status" WITH (IF energy > 5 THEN "active" ELSE "inactive")
    ADD "production_count" WITH (IF production_status EQUALS "active" THEN (production_count + 1) ELSE 0)
    ADD "stoppage" WITH (IF production_status EQUALS "inactive" THEN 1 ELSE 0)
    ADD "maintenance_alert" WITH (IF energy > 50 THEN TRUE ELSE FALSE)
    ADD "timestamp" WITH TIMESTAMP "UTC"
    STORE IN "OpenSearch_Log"
        WITH TABLE "MachineProductionData"
```
After you deploy this updated model, new data is automatically stored in the database each time the trigger topic updates.
Connect to your PostgreSQL managed database using DBeaver to verify scalable storage:

As we’ve seen before, all of the data is available in the MQTT Broker for other uses and integrations.

Connect to your MongoDB managed database using MongoDB Compass to verify scalable storage:

You should see real-time data documents with structure similar to:
```json
{
  "_id": {
    "$oid": "68626dc3e8385cbe9a1666c3"
  },
  "energy": 36,
  "energy_wh": 36000,
  "production_status": "active",
  "production_count": 31,
  "stoppage": 0,
  "maintenance_alert": false,
  "timestamp": "2025-06-30 10:58:11",
  "device_name": "station2"
}
```
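The timestamp field in the sample document uses a plain UTC format. Assuming your stored records follow the same format, you can parse it downstream with the standard library:

```python
from datetime import datetime

def parse_ts(value: str) -> datetime:
    """Parse the model's UTC timestamp string, e.g. '2025-06-30 10:58:11'."""
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

dt = parse_ts("2025-06-30 10:58:11")
```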
As we’ve seen before, all of the data is available in the MQTT Broker for other uses and integrations.
Connect to your MySQL managed database using DBeaver to verify scalable storage:
- Expand the `coreflux-broker-data` database (or the name you gave to the database)
- Check the `MachineProductionData` table for stored records
As with the other integrations, all of the data is also available in the MQTT Broker for other uses and downstream integrations.

Open OpenSearch Dashboards with the provided URL and credentials:


As we’ve seen before, all of the data is available in the MQTT Broker for other uses and integrations.

You integrate Coreflux MQTT broker with a managed database by defining a Route in LoT that points to your target service (PostgreSQL, MySQL, MongoDB, or OpenSearch). Each route uses the appropriate connection parameters (server or connection string, port, database name, username, password, and SSL/TLS options) and automatically persists MQTT message payloads into tables, collections, or indexes. Once the route is defined, you attach it to a Model with the STORE IN directive so that every processed message is written to your chosen database.
Yes. Coreflux is designed as a low-code integration layer, so you do not need to write application code or external ETL jobs to persist data. For each database type, you configure a LoT route (for example, PostgreSQL_Log, MySQL_Log, mongo_route, or OpenSearch_Log) and then extend your model with STORE IN "<route_name>" WITH TABLE "MachineProductionData". Coreflux handles connection pooling, retries, and error handling, so you focus on modeling topics and transformations instead of boilerplate database code.
The best managed database for your MQTT IoT data depends on your data structure, query needs, and analytics goals. Use the comparison table below to help you decide:
| Database | Best For | Example Use Cases |
|---|---|---|
| PostgreSQL | Strong consistency, relational schemas, complex SQL queries | Industrial sensor networks, transactional events, analytics |
| MySQL | Relational data, structured queries, wide compatibility | Inventory systems, production metrics, traditional business records |
| MongoDB | Flexible, evolving schemas; document storage | Connected devices with variable payloads, IoT telemetry with changing formats |
| OpenSearch | Full-text search, analytics, dashboards, log indexing | Time-series analytics, monitoring, event logs, IoT search and visualization |
Tip: You can use more than one managed database at the same time by configuring multiple Coreflux routes. This makes it possible to store structured IoT data in PostgreSQL or MySQL, aggregate logs and metrics in OpenSearch, and collect unstructured or schemaless data in MongoDB, all from the same MQTT stream.
Coreflux keeps all processed values available on MQTT topics for real-time consumption, dashboards, or additional pipelines, while Routes persist the same modeled data to your databases for historical queries. In practice, you can subscribe to topics for immediate reactions (alerts, control loops) and query PostgreSQL/MySQL/MongoDB/OpenSearch for aggregates, trends, and long-term analysis. This dual-path design mirrors common patterns in MQTT and IoT data integration guides, where a broker provides live messaging and databases provide durable storage and analytics.
When deploying on DigitalOcean, you can use VPC networking to keep all communication between your Coreflux MQTT broker and databases private. The VPC isolates your resources from public internet access, and DigitalOcean managed databases support TLS encryption for connections. Additionally, you can create dedicated database users with limited permissions for your Coreflux application, following the principle of least privilege.
Yes. This architecture mirrors patterns used in production MQTT and database integrations, where a broker front-ends device traffic and a managed database tier provides durability and analytics. DigitalOcean managed databases offer automated backups, high availability, and monitoring, while Coreflux MQTT broker can scale horizontally to handle high message throughput. For production, you should also configure firewall rules, use strong credentials, enable TLS for MQTT and database connections, and size your droplets and clusters based on expected message volume.
Yes. MQTT brokers are often deployed in private networks or edge environments, and public resources consistently note that MQTT can work without the public internet as long as clients can reach the broker. With DigitalOcean, you can keep Coreflux and your databases inside a VPC and only expose what is strictly necessary (for example, a VPN, bastion host, or limited firewall rules). You can also synchronize selected topics with other brokers or cloud regions if you need hybrid or multi-site architectures.
MQTT is optimized for lightweight, event-driven messaging; databases are optimized for storage and querying. Storing every raw message can become expensive or noisy, so best practices recommend modeling data carefully (for example, aggregating metrics, filtering topics, or downsampling). Very low-power devices or ultra-constrained networks might struggle with persistent connections or TLS overhead, in which case you may need to tune QoS levels, batching, and retention policies. As long as you design your models and routes with these trade-offs in mind, MQTT plus managed databases works well for most IoT scenarios.
You should choose a managed database based on your IoT data structure, scalability, and how you want to query your device data. The table below summarizes the strengths of each option:
| Database | Best When… | Typical Use Cases | Key Strengths |
|---|---|---|---|
| PostgreSQL | You need complex relational queries, strong consistency, and transactional integrity (ACID support). | Industrial sensor networks, correlating device data with production, needing analytics over joined datasets | Relational schemas, advanced SQL, consistency |
| MySQL | Your workloads are structured, with wide tooling and compatibility needs. | Inventory tracking, traditional business systems, production metrics | Simpler relational needs, broad support |
| MongoDB | Your device payloads and schemas evolve, or you want fast prototyping with flexible, document-based storage. | IoT telemetry with variable formats, rapid development, semi-structured data | Flexible schemas, easy scaling, fast prototyping |
| OpenSearch | You need to analyze, search, or dashboard large volumes of IoT data (logs, time series, events). | Searching sensor data, log analytics, visualization, keyword/time-based queries | Search, full-text, analytics, fast aggregation |
Integrating Coreflux MQTT broker with DigitalOcean’s managed database services (PostgreSQL, MongoDB, MySQL, or OpenSearch) gives you a complete setup for real-time IoT data processing and storage. Following this tutorial, you’ve built an automation pipeline that collects, processes, and stores IoT data using low-code development practices.
With Coreflux’s architecture and your chosen database’s storage features, you can handle large volumes of real-time data and query it for insights. Whether you’re monitoring industrial systems, tracking environmental sensors, or managing smart city infrastructure, this setup lets you make data-driven decisions based on both live MQTT topics and historical database queries.
You can learn more about DigitalOcean managed databases and explore advanced Droplet configurations in the DigitalOcean documentation.
You can try the provided use cases or implement your own using Coreflux and DigitalOcean. You can also get the free Coreflux MQTT Broker on the DigitalOcean Droplet Marketplace or through the Coreflux website.
Learn more about what you can do with Coreflux and LoT in the Coreflux Docs and Tutorials.
Note: While this tutorial uses the Coreflux or Docker Marketplace image for simplified deployment, you can also install Coreflux MQTT broker directly on Ubuntu. For manual installation instructions, visit the Coreflux Installation Guide.
Thanks for learning with the DigitalOcean Community. Check out our offerings for compute, storage, networking, and managed databases.