Hi - I was trying to run some basic performance tests on Spaces recently, and beyond a certain level the requests were getting rejected with 403s (sample below). Could someone indicate what rate limits are enforced on Spaces (open HTTP connections, max API calls, etc.)? Thanks

Upload status 403 Forbidden: resp: &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Amz-Request-Id:[tx00000000000001037eae3-0059ae1784-8e28-nyc3a] Content-Length:[177] Accept-Ranges:[bytes] Content-Type:[application/xml] Date:[Tue, 05 Sep 2017 03:18:28 GMT] Strict-Transport-Security:[max-age=15552000; includeSubDomains; preload] Access-Control-Allow-Origin:[]] Body:0xc4202849c0 ContentLength:177 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc4200d22d0 TLS:0xc4200a86e0}



40 answers

Well, I’m still in awe of how stubborn DigitalOcean is about this issue that so many users complain about. We had buckets with 20-30 TB - virtually unusable - and moved them away to Wasabi.com. They don’t impose any rate limiting and their API is super fast. You should check them out.

Digital Ocean. I absolutely love you. BUT, this problem is now unacceptable and inexcusable. We will be moving five separate Spaces servers with over 50 spaces to another provider. We will also be advising 20+ of our clients to make the same move.

You have completely fallen down on the job here, and I can no longer afford to waste hundreds of $$$ of my valuable time dealing with this stupidity. It may be time to start blogging to spread the word on what a failure this is.

I’m seeing “503 Slow Down”. Some communication on this would be good…

Having the same issue as jborg.

<Message>Please reduce your request rate.</Message>

I’m looking at my logs, there is no way I’m hitting 200 requests per second, maybe 200 per minute. I’ve effectively been blocked from using a service I’m paying for with no warning.

In the 3 years I was using S3 I never had this issue, even when under extreme load.

This is unacceptable, I don’t think Spaces is ready for prime time usage based on this.

Sharding your requests across multiple Spaces? How is that a solution? It’s supposed to be infinitely scalable.

  • Got a reply from DO Support:

    Thank you for contacting DigitalOcean.

    I apologize for the inconvenience caused! We’re aware of an issue with the Spaces service that is related to the behaviour you’ve reported. Our Engineering team has fixed the issue and now it should be working fine. Thank you for your patience, and understanding in this matter.

    Please let us know if you have any additional questions, and have a wonderful day!

    NOTE: There was no issue noted on the DO status page during the outage I was experiencing.

  • Got the same response on random requests. I never hit the 200 requests/second limit, but some random requests got a 503 slow down response. Support hasn’t answered yet. I have simply switched to AWS S3. There is no request limit there, nor random “slow down” responses. I hope DO will fix this ASAP.

Also, I was getting the 503 slowdown message when using s3cmd to modify metadata and permissions on files. How can I tell s3cmd to slow down?
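As far as I can tell, s3cmd doesn’t expose a request-rate option (its --limit-rate flag throttles bandwidth, not request count), so the usual workaround is to add the delay client-side by driving s3cmd from a wrapper script. A minimal sketch in Python — the s3cmd invocation in the comment is illustrative only, not a recommendation of specific flags:

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive calls."""

    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

# Hypothetical usage: invoke s3cmd once per object, at most 2 calls/second.
# limiter = RateLimiter(2)
# for key in keys:
#     limiter.wait()
#     subprocess.run(["s3cmd", "modify", "--acl-public", f"s3://bucket/{key}"])
```

This is cruder than running s3cmd once over many files, but it gives you direct control over the request rate.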

Just started using Spaces and get this error message even with about 1-2 requests per second, or sometimes less: “Please reduce your request rate.”

Even when slowing down to 1 request per 5 seconds I get this error.

Something has to be fixed, in the meanwhile, switching back to AWS S3.

I also have this issue. I get 503 errors when I reach only 30 to 40 requests per second.

The solution of putting my objects in multiple Spaces is too complicated to handle. This is the exact opposite of your value proposition, “Simplicity at scale” (homepage).

Same here… Annoying problem

We are getting rate limited as well. We use Spaces to store static assets, with a CDN in front of it, but we are getting repeated reports from our customers that they can’t load images properly on our e-commerce website. This issue has been going on for quite some time now, and the CDN doesn’t fix it. It’s disappointing, because we would like to stay in the DO ecosystem for all the IaaS services we need to run our business, but this problem is forcing us out.

We just had the same error cleaning up old backups using Laravel Backup with the S3 driver https://github.com/spatie/laravel-backup.

An error occurred while cleaning up the backups of App XXX

Exception message: Error executing "ListObjects" on
AWS HTTP error: Server error: GET https://ourapp.ams3.digitaloceanspaces.com/?prefix=App-XXX%2F2019-07-11-02-00-03.zip%2F&max-keys=1&encoding-type=url
resulted in a 503 Slow Down response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SlowDown</Code>
  <Message>Please reduce your request rate.</Message>
  <RequestId></RequestId>
</Error>

Issue mentioned at https://github.com/spatie/laravel-backup/issues/783

Any news? This is still an issue and I’m not reaching 200 requests per second. At the very least, provide a detailed log so I have a chance to do some qualified debugging.

What does this actually mean?
“ If you plan to push more than 200 requests per second to Spaces, we recommend using the Spaces CDN ”

Does this apply to PUT requests? Because I have millions of static items that need to be PUT into Spaces, and the 150/s PUT rate makes this really difficult. I created 3 spaces/buckets and didn’t see a difference in PUT rate. Still getting throttled or getting slowdown responses.

Shocked I’m seeing this. I’ve been a big DO fan and I’m just coding a simple utility script and it’s saying SlowDown with only TWO files I want to copy within a bucket. So frustrated that something so simple can’t work at the smallest scale. Really reconsidering my commitment with DO and my 4 clients.

bump - any thoughts on this?

I would expect the API would use HTTP 429, and not 403 if it was rate limiting.

I get 403s in the web upload as well, very annoying.

@simeonpashley @jborg at the moment we are rate limiting individual Spaces (the 503 error) that are receiving more than 200 reqs/s. If you need higher throughput we would ask that you create multiple Spaces and split your objects among them.

Also, since Spaces is not a CDN, we recommend using Spaces as a CDN origin with a 3rd-party CDN if you’re trying to serve assets with low latency and high throughput to your end users. (Here are a couple of links with configuration examples: https://www.digitalocean.com/community/questions/does-do-spaces-provide-a-cdn-in-front-of-the-storage-for-fast-global-access)

@wavejd Can you file a ticket about the 403’s and share whatever debug / log/ screengrab info you can along with it? This isn’t a rate limiting error AFAIK but we would like to investigate for sure.

@wawawa We are working on a post on exactly that topic (with a focus on architecting for performance and reliability) and will publish it soon (this month, ideally).
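For anyone who does go the multiple-Spaces route suggested above, the key is to pick the Space for each object deterministically from its key, so reads and writes always agree on which shard holds which object. A minimal sketch in Python — the Space names are hypothetical, and you would pass the result to whatever S3 client you use:

```python
import hashlib

def bucket_for_key(key, buckets):
    """Deterministically map an object key to one of several Spaces.

    Uses MD5 rather than Python's built-in hash(), which is salted
    per-process and would break the mapping between runs.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(buckets)
    return buckets[index]

# Hypothetical shard names; each is a separate Space.
SPACES = ["assets-shard-0", "assets-shard-1", "assets-shard-2"]
```

The obvious downside, as others note above, is that listing "all objects" now means listing every shard, and you can’t rebalance without moving data.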

I have run into this bug as well. I know for a fact I am not anywhere near 200 req/sec because I have my images mirrored on another server and when I switch to that server and look at apache status it is only doing around 15-20 req/sec.

I was under the impression that the spaces were suitable to use for a production website. Is that not the case? Is there any way to pay more to increase the 200 req/sec limit? I think that would be plenty on average, but we do have traffic spikes at times.

I’m getting this error during development. I’ve run the upload script once. It’s definitely a bug.

I am getting rate-limited as well whilst listing bucket contents. I am issuing well below 1QPS. My bucket is in NYC.

My code is in Go (using github.com/aws/aws-sdk-go) and the error data that I get back is:
ServiceUnavailable: Please reduce your request rate.
status code: 503, request id: , host id:

This is a regression since yesterday, when equivalent code worked. (And I agree that 503 is not an expected response for rate limiting).
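Until the server-side behaviour changes, the standard client-side mitigation for intermittent SlowDown/503 responses is retrying with exponential backoff and jitter. Most S3 SDKs can do this for you (boto3, for example, has configurable retry behaviour), but the pattern is simple enough to sketch generically — the SlowDownError class below is a stand-in for illustration, not a real SDK type:

```python
import random
import time

class SlowDownError(Exception):
    """Stand-in for a 503 SlowDown response from the API."""

def with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Call fn(), retrying on SlowDownError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except SlowDownError:
            if attempt == max_attempts - 1:
                raise  # out of retries, surface the error
            # double the delay each attempt, randomized to avoid thundering herd
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)
```

This doesn’t fix the underlying limit, but it turns sporadic 503s into slower successes instead of hard failures.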

Also getting rate limited with the --recursive flag on 40 items. Very likely under 200 req/s.

We also have this issue. It is causing us a lot of problems on a site that is due to launch imminently. The response time of spaces (varies widely and generally very slow), combined with the rate limiting is forcing us to look elsewhere. We would much rather stick with DO. Any solution in sight?

Same problem when doing a sync. Very annoying. DO is wasting a lot of our time!

Having the same issue here while using s3cmd to sync folder, this is extremely annoying.

Same issue using sync; getting 503 just fetching the remote object list, and it definitely isn’t at 200+/sec.

Getting the same error (seemingly at random) from developing an app that shows a few images.

Hi there as well :)

I am randomly getting 503 errors as well when using s3cmd put for about 1484 files.

Took about 5 minutes to upload them all… so I strongly doubt I hit anything near 10 req/sec :D

In the end, I uploaded 1460 files out of 1484.

24 files went for 503.

Not really sure I can live with that. Is there a way to fix this?

Same problem here. Any solution yet?

  • Unfortunately, there is no solution to be expected very soon. I’ve tried to reason with DigitalOcean support. They don’t see the problem with this behaviour. They say that your app should handle this and that you should put a CDN in front of your Spaces.

    Honestly, I don’t get what Spaces is actually usable for. We have S3 buckets with terabytes of storage and a lot of traffic; DO could earn a lot of money. Sadly, we have now moved to another S3 alternative.

Same error today :-( I see DO is fixing this at the moment. I hope that in production mode I won’t get these errors :-)

These errors aren’t always related to rate limiting. IMHO, Spaces simply has capacity problems and can’t handle any production usage ATM. Just wait till you start doing some real work with it, and you start getting timeouts after 300 seconds of delay instead of a fast 503 slow down response…

My recent usage for NYC3 Spaces averages 13 requests per MINUTE (average from the last 11 days, counting GET, PUT and DELETE), and I have already had to open at least half a dozen tickets due to GET requests for an object failing for hours… That’s right, hours… I had to wait for hours to be able to access some objects I stored there.

AMS3 Spaces works a little better than NYC3, but it has the same problems; it’s just that in AMS3 when an object gets “stuck” it “unstucks” after 10-60 minutes at most, instead of the 2+ hours that often happen in NYC3.

At this exact moment I have an object in NYC3 that hasn’t been readable for 4+ hours. It’s so frustrating that I had to come to the community forums to vent.

We had to migrate to S3 (no problems for us now). We tried deploying Fastly CDN with shielding enabled but Spaces was still acting as a bottleneck. As mentioned, Spaces is not ready for a live environment. We will consider moving back once all the issues are resolved as it makes sense to have our stack in one place (saving with internal bandwidth from web server to object storage).

I’m using rclone to migrate files from aws and it’s been fine apart from a few minutes where it seemed to get stuck and spaces appeared to be unavailable. However if I try updating file permissions recursively using s3cmd I get a rate limit warning every few files changed.

@hazelnut is this still a problem?

  • Back in February, we resolved an issue where some Spaces customers were being rate limited even though they were below the threshold — so yes, you should only hit this problem if you’re pushing 200 requests/second or more.

I get this error now; it seems like it came back. I am not making many requests.

Just trying out Spaces, and while using the aws cli tool, I’m getting:
An error occurred (SlowDown) when calling the UploadPart operation (reached max retries: 4): Please reduce your request rate.

even when maxconcurrentrequests = 1 and maxqueuesize = 1.

Not exactly a useful service so far.


I was using AWS S3 for the last 2 years and never saw anything like this; now, wham, we literally cannot continue with our setup.

This bug is back with only 2 requests.
