Rate limiting on Spaces?

Posted September 27, 2017 25.6k views
Object Storage

Hi - I was trying to run some basic performance tests on Spaces recently and after a certain level, the requests were getting rejected with 403s (sample below). Could someone indicate what rate limits are enforced on Spaces (open HTTP connections, max API calls, etc)? Thanks

Upload status 403 Forbidden: resp: &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Amz-Request-Id:[tx00000000000001037eae3-0059ae1784-8e28-nyc3a] Content-Length:[177] Accept-Ranges:[bytes] Content-Type:[application/xml] Date:[Tue, 05 Sep 2017 03:18:28 GMT] Strict-Transport-Security:[max-age=15552000; includeSubDomains; preload] Access-Control-Allow-Origin:[]] Body:0xc4202849c0 ContentLength:177 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc4200d22d0 TLS:0xc4200a86e0}


42 answers

Well, I'm still in awe at how stubborn DigitalOcean is about this issue that so many users complain about. We had buckets with 20-30 TB that were virtually unusable, so we moved them away to another provider. These guys don't know anything about rate limiting and their API is super fast. You should check them out.

  • Oh my god! I was seriously looking for something like this! I first thought B2 cloud storage was way cheaper, but then I read the docs carefully: they charge $0.004 per 1,000 transactions, for any type of request! This helps me a lot! Do they have a search/filter feature like DigitalOcean?

  • Have you read their policy?

    For example, if you store 100 TB with Wasabi and download 100 TB or less within a monthly billing cycle, then your storage use case is a good fit for our free egress policy. If your monthly downloads exceed 100 TB, then your use case is not a good fit.

    If your storage use case exceeds the guidelines of our free egress policy on a regular basis, we reserve the right to limit or suspend your service.

DigitalOcean, I absolutely love you. BUT this problem is now unacceptable and inexcusable. We will be moving five separate Spaces servers with over 50 Spaces to another provider. We will also be advising 20+ of our clients to make the same move.

You have completely fallen down on the job here, and I can no longer afford to waste hundreds of $$$ of my valuable time dealing with this stupidity. It may be time to start blogging to spread the word on what a failure this is.

I’m seeing “503 Slow Down”. Some communication on this would be good…

Having the same issue as jborg.

<Message>Please reduce your request rate.</Message>

I'm looking at my logs, and there is no way I'm hitting 200 requests per second; maybe 200 per minute. I've effectively been blocked from using a service I'm paying for, with no warning.

In the 3 years I was using S3, I never had this issue, even under extreme load.

This is unacceptable, I don’t think Spaces is ready for prime time usage based on this.

Sharding your requests across multiple Spaces? How is that a solution? It's supposed to be infinite and scalable.

  • Got a reply from DO Support:

    Thank you for contacting DigitalOcean.

    I apologize for the inconvenience caused! We’re aware of an issue with the Spaces service that is related to the behaviour you’ve reported. Our Engineering team has fixed the issue and now it should be working fine. Thank you for your patience, and understanding in this matter.

    Please let us know if you have any additional questions, and have a wonderful day!

    NOTE: There was no issue noted on the DO status page during the time of this outage I was experiencing.

  • Got the same response on random requests. I never hit the 200 requests/second limit, but some random requests got a 503 Slow Down response. Support has not answered yet. I have simply switched to AWS S3; there is no request limit nor random "slow down" responses there. I hope DO will fix this ASAP.
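For what it's worth, spreading objects across several Spaces (the sharding workaround discussed above) just means picking a bucket deterministically from the object key. A minimal sketch, where the bucket names are hypothetical placeholders for your own Spaces:

```python
import hashlib

# Hypothetical Space (bucket) names -- substitute your own.
BUCKETS = ["assets-00", "assets-01", "assets-02", "assets-03"]

def bucket_for(key):
    """Map an object key to one Space deterministically, so the same key
    always lands in the same bucket and load spreads roughly evenly."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return BUCKETS[digest[0] % len(BUCKETS)]

print(bucket_for("images/product-42.jpg"))
```

It doesn't make the per-Space limit go away, it just divides your request rate by the number of buckets, which is exactly why several people here find it an unsatisfying answer.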

Also, I was getting the 503 slowdown message when using s3cmd to modify metadata and permissions on files. How can I tell s3cmd to slow down?
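As far as I can tell, s3cmd doesn't expose a request-rate option, so you can't slow it down directly. If you control the client code, one common workaround is wrapping each call in retry-with-exponential-backoff so a 503 SlowDown pauses before retrying. A rough sketch; `flaky_request` is a stand-in for a real API call, not an s3cmd or Spaces API:

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=0.2):
    """Call fn(); on a SlowDown-style error, sleep with exponential
    backoff plus a little jitter, then retry up to max_retries times."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for catching a 503 SlowDown response
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Hypothetical demo: fail twice with "503 Slow Down", then succeed.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 Slow Down")
    return "200 OK"

print(with_backoff(flaky_request))  # prints: 200 OK
```

The jitter keeps many clients from retrying in lockstep and hammering the service at the same instant.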

Just started using Spaces, and I get this error message even at about 1-2 requests per second, sometimes less: "Please reduce your request rate."

Even when slowing down to 1 request per 5 seconds I get this error.

Something has to be fixed; in the meantime, I'm switching back to AWS S3.
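If you'd rather avoid tripping the limit in the first place, you can pace requests client-side so calls are spaced out evenly. A minimal sketch of such a pacer; the rate value here is an assumption for illustration, not a documented Spaces limit:

```python
import time

class Throttle:
    """Block between calls so they never exceed `rate` per second.
    A client-side pacing sketch, not a Spaces feature."""
    def __init__(self, rate):
        self.min_interval = 1.0 / rate
        self.last = time.monotonic() - self.min_interval

    def wait(self):
        sleep_for = self.min_interval - (time.monotonic() - self.last)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last = time.monotonic()

throttle = Throttle(rate=5)   # 5 req/s is a guess; tune for your account
start = time.monotonic()
for _ in range(3):
    throttle.wait()           # issue your upload/list request right after this
elapsed = time.monotonic() - start  # roughly 0.4s for the two paced gaps
```

Pacing helps with steady workloads, but as posters above note, some of the 503s here appear at rates far below any published limit, which pacing alone can't explain.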

I also have this issue. I get 503 errors when I reach only 30 to 40 requests per second.

The solution of putting my objects in multiple Spaces is too complicated to handle. This is the exact opposite of your value proposition, "Simplicity at scale" (homepage).

Same here… Annoying problem

We are getting rate limited as well. We use Spaces to store static assets with a CDN in front of it, but we are getting repeated reports from our customers that they cannot load images properly on our e-commerce website. This issue has been going on for quite some time now, and the CDN doesn't fix it. It's disappointing because we would like to stay in the DO ecosystem for all the IaaS services we need to run our business, but this problem is forcing us out.

We just had the same error cleaning up old backups using Laravel Backup with the S3 driver:

An error occurred while cleaning up the backups of App XXX

Exception message: Error executing "ListObjects" on
AWS HTTP error: Server error: GET;max-keys=1&encoding-type=url
resulted in a 503 Slow Down response:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SlowDown</Code>
  <Message>Please reduce your request rate.</Message>
  <RequestId></RequestId>
</Error>

Issue mentioned at
