Why is my DigitalOcean Droplet bandwidth usage so high all of a sudden?

Posted on March 25, 2026

I have a small Droplet running a couple of Docker containers (web app + API). Traffic hasn’t really changed, but I noticed that my bandwidth usage in the DigitalOcean dashboard suddenly spiked over the last few days.

I’m not doing any large file transfers (at least not intentionally), and my app is pretty lightweight.

What are some common causes for unexpected bandwidth spikes on Droplets? And how would you go about debugging this?




Accepted Answer

Hi there,

There are a few common things I’d usually check first:

  • Unexpected outbound traffic (this is the big one). Bandwidth charges are mostly based on outbound traffic, so something in your app (or a container) might be making external requests more often than expected.

  • Bots or crawlers. Even if your user traffic is stable, bots can hit your app quite aggressively. It might be worth checking your web server logs (Nginx/Apache) to see if there’s been a spike in requests from certain IPs or user agents.

  • Misconfigured services or loops. Sometimes a service can get stuck retrying requests (for example, failing API calls or webhooks), which can quietly generate a lot of outbound traffic.

  • Container/image pulls or updates. If you’re frequently rebuilding or pulling images (especially large ones), that can add up, particularly in CI/CD setups.

  • Backups or data sync jobs. Any scheduled jobs that upload data to external storage (S3-compatible services, APIs, etc.) could also explain a sudden increase.
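For the bots/crawlers point, a quick log check might look like this sketch. It generates a tiny sample log just so the commands are runnable as shown; in practice you'd point them at your real access log (commonly /var/log/nginx/access.log, but that path depends on your setup), and it assumes Nginx's default "combined" log format:

```shell
#!/bin/sh
# Sample access log in Nginx "combined" format, for illustration only.
# Replace access.sample.log with your real log file.
cat > access.sample.log <<'EOF'
1.2.3.4 - - [25/Mar/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "BadBot/1.0"
1.2.3.4 - - [25/Mar/2026:10:00:01 +0000] "GET /feed HTTP/1.1" 200 512 "-" "BadBot/1.0"
5.6.7.8 - - [25/Mar/2026:10:00:02 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF

echo "== top client IPs =="
awk '{print $1}' access.sample.log | sort | uniq -c | sort -rn | head -10

echo "== top user agents (last quoted field) =="
awk -F'"' '{print $6}' access.sample.log | sort | uniq -c | sort -rn | head -10
```

If one IP or one user agent dominates the counts, that's usually your bot.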

In terms of debugging, I’d approach it step by step:

  1. Check your application and web server logs for unusual traffic patterns.

  2. Use tools like iftop, nload, or vnstat on the Droplet to see real-time and historical network usage.

  3. Look at outbound connections with something like netstat or ss to see where traffic is going.

  4. If you’re using Docker, check container logs individually, as one container is often the culprit.
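For step 3 in particular, the ss output can be aggregated so heavy remote hosts float to the top. A minimal sketch (plain IPv4 assumed; the port-stripping regex would need adjusting for IPv6 peers):

```shell
#!/bin/sh
# Tally established TCP connections per remote host.
# In plain `ss -tn` output, column 5 is "Peer Address:Port";
# strip the port and count occurrences per host.
ss -tn 2>/dev/null | awk 'NR > 1 { sub(/:[0-9]+$/, "", $5); print $5 }' \
  | sort | uniq -c | sort -rn | head -10
```

This only shows a snapshot of current connections, so it's worth running a few times while the spike is happening.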

DigitalOcean also has a good overview of Droplet bandwidth here: https://docs.digitalocean.com/platform/billing/bandwidth/

If things still look unclear after all that, it might also be worth temporarily stopping individual services/containers one by one to isolate which one is generating the traffic.
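That stop-one-at-a-time idea can be scripted as a rough sketch like this. The container names here are placeholders, and the DRY_RUN guard just prints what would happen so nothing is stopped by accident:

```shell
#!/bin/sh
# Stop containers one at a time, pausing between each so you can watch
# bandwidth (vnstat, or the DO dashboard graphs) drop when the noisy
# one goes quiet. Set DRY_RUN=0 to actually stop containers.
DRY_RUN=${DRY_RUN:-1}

for name in webapp api; do   # placeholder container names
  if [ "$DRY_RUN" = "1" ]; then
    echo "would stop: $name"
  else
    docker stop "$name"
    echo "stopped $name; watch bandwidth for a few minutes before continuing"
    sleep 300
  fi
done
```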

A lot of the time it ends up being something pretty boring.

If traffic from real users didn’t change, I’d first suspect bots, crawlers, failed webhooks retrying over and over, or one container talking to some external API way more than it should. Outbound traffic is usually the thing to watch, not just incoming visits.

What I’d do is check nginx/app logs, then look at the droplet itself with something like iftop, vnstat, or nload, and also check Docker container logs one by one. In these cases it’s often just one noisy container or one bad loop causing the whole spike.

If nothing obvious shows up, I’d even stop containers one at a time and see when bandwidth drops. That usually finds the culprit faster than guessing.

Heya, @11575c6d695643cdb40ad47d0f1aea

Few things I’d check:

Bot traffic / scrapers — look at your access logs. Random crawlers and vulnerability scanners can chew through bandwidth fast and you won’t notice unless you look. Tons of IPs hitting the same endpoints is usually the giveaway.
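That "tons of IPs hitting the same endpoints" pattern can be pulled out of the log directly: count distinct client IPs per request path, since many different IPs on one path is usually a scanner. A runnable sketch with a sample log standing in for your real one (combined log format assumed, where $7 is the path and $1 the IP):

```shell
#!/bin/sh
# Sample log for illustration; substitute your real access log.
cat > scan.sample.log <<'EOF'
1.1.1.1 - - [25/Mar/2026:09:00:00 +0000] "GET /wp-login.php HTTP/1.1" 404 150 "-" "zgrab/0.x"
2.2.2.2 - - [25/Mar/2026:09:01:00 +0000] "GET /wp-login.php HTTP/1.1" 404 150 "-" "zgrab/0.x"
3.3.3.3 - - [25/Mar/2026:09:02:00 +0000] "GET /wp-login.php HTTP/1.1" 404 150 "-" "zgrab/0.x"
1.1.1.1 - - [25/Mar/2026:09:03:00 +0000] "GET / HTTP/1.1" 200 800 "-" "Mozilla/5.0"
EOF

# Distinct IPs per path: dedupe (path, ip) pairs, then count per path.
awk '{print $7, $1}' scan.sample.log | sort -u \
  | awk '{print $1}' | uniq -c | sort -rn | head -10
```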

Docker image pulls — if you have images pulling on a schedule, that could add up. Might not be it but worth checking.

Cron jobs — anything doing backups or shipping logs somewhere? crontab -l and see if there’s something you forgot about.
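Besides your own crontab, the system-wide cron locations are worth a look too. A sketch that greps them for commands that typically move data off the box (the keyword list is just an illustrative guess; extend it for your stack):

```shell
#!/bin/sh
# Your own crontab, if any.
crontab -l 2>/dev/null || true

# System-wide cron entries that mention common data-shipping tools.
cat /etc/crontab /etc/cron.d/* 2>/dev/null | grep -v '^#' \
  | grep -E 'rsync|rclone|s3cmd|aws s3|scp|curl|wget' || true
```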

Something less fun — could be a compromised container pushing traffic out. Probably not, but run iftop or nethogs to see what’s actually using bandwidth in real time. If something weird is blasting data out you’ll see it pretty quick.

sudo iftop -n
sudo nethogs

I’d start there and cross-reference with your access logs around when the spike started. The DO dashboard shows bandwidth by day so you can at least narrow down the timeframe.
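Once the dashboard has narrowed it to a day, breaking that day down by hour in the access log makes the cross-reference easier. Sketch, again with a sample log in place of your real one (the date string is whatever day the dashboard points at):

```shell
#!/bin/sh
# Sample log for illustration; substitute your real access log and date.
cat > day.sample.log <<'EOF'
1.2.3.4 - - [25/Mar/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "x"
1.2.3.4 - - [25/Mar/2026:10:30:00 +0000] "GET / HTTP/1.1" 200 512 "-" "x"
5.6.7.8 - - [25/Mar/2026:11:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "x"
EOF

# Requests per hour on the spike day: field 4 is "[dd/Mon/yyyy:HH:MM:SS",
# so the second colon-separated piece is the hour.
grep '25/Mar/2026' day.sample.log | awk '{print $4}' | cut -d: -f2 \
  | sort | uniq -c
```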

Hope this helps!
