Is there a sensible way to restrict access to the server if the bandwidth is exceeded?

November 25, 2015
Billing, Server Optimization, LAMP Stack, Ubuntu

I'm under the assumption that DO still doesn't charge for overages on bandwidth/storage. However, I want to add protection to future-proof my server. I plan to add my clients onto a new Droplet, but I'd prefer to upgrade my current one and use that. I know 2TB is a lot as a baseline, but there's that saying, "you never know", ha!

I will be using the LAMP stack. Is there a server configuration, DO API endpoint, or cron job I could tap into for this?

Kind Regards,

1 comment
1 Answer


If you were a customer when DigitalOcean was not limiting bandwidth, then you're most likely what is called "grandfathered in" to the original offer (so long as you're not abusing it, causing network issues and so forth). If you signed up after DigitalOcean began limiting bandwidth, then you would be set to whatever limit is in place for the Droplet you choose.

As for restricting access, by default there's no option to restrict or shut down a Droplet should it hit or begin to exceed its bandwidth allocation. Unfortunately, the current API does not expose bandwidth usage; however, if you can code, there's always a solution :-).

Before going into really technical detail, let's create a simple example. Let's say your web server is Server A and you have three client web servers (B, C and D), each hosting one domain. All are running NGINX.


1). Set up each Client Server (B, C and D) to log access to /var/log/nginx/access/access.log.

2). Set up each Client Server to rotate logs and maintain 7 days of data. Rotated logs could be stored in /usr/local/src/nginx/logs/access/ (for example). Ideally, keep these logs away from the primary log storage directory (a rough sketch of steps 1 and 2 follows after this list).

3). At the end of each week, the past seven days of logs should be compressed into a .tar.gz file in an "archive" directory, away from the individual log files (e.g. /usr/local/src/nginx/logs/rotated). You can then wipe the log files in /usr/local/src/nginx/logs/access/ once you've validated the integrity of the archive (see the archive sketch below).

4). Now your server (A) would need to fire off a script (Bash, PHP, Python, NodeJS, etc.) that logs in to each client server (here's where that Droplet information comes into play), downloads the archive files, extracts each one into a dedicated directory per client, and then reads each log file while adding up the payload size of each request. The totals would then be logged to a file or database for that user. At that point it's a bit of basic math to divide out the bytes, giving you MB, GB and TB of usage (a sketch of such a script follows below).
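
To make steps 1 and 2 a little more concrete, here's a rough sketch of the client-side configuration, assuming a stock Ubuntu box with logrotate installed. The drop-in file name is made up for the example; the paths and 7-day retention are the ones from the steps above.

    # In each site's NGINX server block -- point access logging at the shared path (step 1):
    access_log /var/log/nginx/access/access.log combined;

    # /etc/logrotate.d/nginx-access -- hypothetical drop-in covering step 2:
    /var/log/nginx/access/access.log {
        daily
        rotate 7
        missingok
        notifempty
        dateext
        # olddir must live on the same filesystem as the source log
        olddir /usr/local/src/nginx/logs/access
        sharedscripts
        postrotate
            # ask NGINX to reopen its log files after rotation
            if [ -f /var/run/nginx.pid ]; then kill -USR1 $(cat /var/run/nginx.pid); fi
        endscript
    }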
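
For step 3, a weekly cron entry plus a small shell script would handle the archiving. The script name and the Sunday 01:00 schedule are just assumptions for the example:

    # Root crontab entry -- run the archive job every Sunday at 01:00:
    # 0 1 * * 0 /usr/local/bin/archive-access-logs.sh

    #!/usr/bin/env bash
    # archive-access-logs.sh -- bundle the past week's rotated logs, then wipe them
    set -euo pipefail

    ROTATED=/usr/local/src/nginx/logs/access
    ARCHIVE=/usr/local/src/nginx/logs/rotated
    STAMP=$(date +%Y-%m-%d)

    mkdir -p "$ARCHIVE"
    tar -czf "$ARCHIVE/access-$STAMP.tar.gz" -C "$ROTATED" .

    # Only remove the individual log files once the archive lists back cleanly.
    if tar -tzf "$ARCHIVE/access-$STAMP.tar.gz" > /dev/null; then
        rm -f "$ROTATED"/access.log-*
    fi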
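
And for step 4, a sketch of what the collector script on Server A might look like in plain Bash. It assumes key-based SSH access to each client and the default NGINX combined log format, where the response size is the 10th field; the host aliases and working directory are placeholders.

    #!/usr/bin/env bash
    # collect-bandwidth.sh -- hypothetical collector, run weekly from Server A
    set -euo pipefail

    CLIENTS="server-b server-c server-d"   # SSH host aliases -- placeholders
    WORKDIR=/usr/local/src/bandwidth

    for host in $CLIENTS; do
        dest="$WORKDIR/$host/$(date +%Y-%m-%d)"
        mkdir -p "$dest"

        # Pull down this week's archive(s) and unpack them into the client's directory.
        scp "$host:/usr/local/src/nginx/logs/rotated/*.tar.gz" "$dest/"
        for f in "$dest"/*.tar.gz; do
            tar -xzf "$f" -C "$dest"
        done

        # Add up the response size of every request (field 10 in the combined format),
        # then do the basic math to turn bytes into GB.
        bytes=$(cat "$dest"/access.log* | awk '{ sum += $10 } END { printf "%.0f", sum }')
        gb=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (1024 * 1024 * 1024) }')

        echo "$(date +%F)  $host  $bytes bytes (~$gb GB)" >> "$WORKDIR/usage.log"
    done

From there you could compare the running totals against whatever cap you want to enforce and take whatever action fits, whether that's an email warning or disabling the client's virtual host.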


Of course, this is a very basic example and only uses NGINX logs. The more complex the setup, the more complex your scripts will be. This is more to give you an idea of what you could do.

You could also, just as well, offload the logging to an external service and, provided that service has an API, query their API every X hours or minutes and pull the logs down that way.
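
As a rough sketch of that approach, a cron job along these lines would do it. The endpoint, token and query parameter below are entirely made up for illustration; you'd substitute whatever the service actually documents:

    # Hypothetical polling job -- every 6 hours, pull the latest events:
    # 0 */6 * * * /usr/local/bin/pull-remote-logs.sh

    #!/usr/bin/env bash
    # pull-remote-logs.sh -- the URL and token are placeholders, not a real API
    set -euo pipefail

    API_TOKEN="your-token-here"
    mkdir -p /usr/local/src/nginx/logs/remote

    # Made-up endpoint purely for shape; append the results for later byte-counting.
    curl -s -H "Authorization: Bearer $API_TOKEN" \
         "https://logs.example.com/api/v1/events?since=6h" \
         >> /usr/local/src/nginx/logs/remote/events.log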

For example, Loggly or PaperTrail may be of interest. These would be ideal, though they do add to the cost of management. If you prefer to do this on a limited or no budget, Bash/Shell + [insert-programming-language] is going to be the best route. It's also going to be the most private and allow the most control.

  • @jtittle

    Thank you for the amazing and fast response. This is by far the best answer I've seen compared to the related ones I had read previously! There aren't enough words to thank you, but I'll take your advice and learn from it. Very helpful use cases, too!

    Since I'm using Apache, I will most likely follow a similar route and use logging through a cloud service, as you mentioned.

    Most hosted websites will be basic anyway and won't require much storage, but it would be good to hook into DigitalOcean/Apache server configs and be able to send/receive data via an external service.

    Again, thank you greatly for the in-depth use-case. Appreciate it!


    • @DevJMD

      No problem at all :-).

      To expand on my previous post, if you use DigitalOcean's API, make sure you're storing the data you pull down so you can avoid repetitive calls to the API. For instance, on the first call, if successful, store the response as a .json file and then parse that file. This will be much faster than actually calling the API every time your script runs. Since you're mainly looking for the IP and hostname of each Droplet, there's no reason to make a call on every run.
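
      Something along these lines is what I mean. It's only a sketch: it assumes curl and jq are installed and that DO_TOKEN holds a DigitalOcean API v2 personal access token, and the cache path and refresh window are arbitrary.

        # Refresh the cached Droplet list at most once a day, then read from the cache.
        CACHE=/usr/local/etc/droplets.json
        MAX_AGE=86400

        if [ ! -f "$CACHE" ] || [ $(( $(date +%s) - $(stat -c %Y "$CACHE") )) -gt "$MAX_AGE" ]; then
            curl -s -H "Authorization: Bearer $DO_TOKEN" \
                 "https://api.digitalocean.com/v2/droplets" > "$CACHE"
        fi

        # Pull the name and first listed IPv4 address for each Droplet out of the cache.
        jq -r '.droplets[] | "\(.name) \(.networks.v4[0].ip_address)"' "$CACHE"

      From there, your collection script reads droplets.json on every run and only touches the API again once the cache goes stale.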

      Since logging services can add to your monthly costs and you mention that you'll only be handling a few basic sites, to save a bit, you'd probably be better off doing this on your own. Both of the services I mentioned are super nice, though they're also overkill if you're not processing a ton of data.
