Rate limiting on Spaces?

September 27, 2017 5.2k views
Object Storage

Hi - I was trying to run some basic performance tests on Spaces recently, and beyond a certain request rate the requests were rejected with 403s (sample below). Could someone indicate what rate limits are enforced on Spaces (open HTTP connections, max API calls, etc.)? Thanks

Upload status 403 Forbidden: resp: &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Amz-Request-Id:[tx00000000000001037eae3-0059ae1784-8e28-nyc3a] Content-Length:[177] Accept-Ranges:[bytes] Content-Type:[application/xml] Date:[Tue, 05 Sep 2017 03:18:28 GMT] Strict-Transport-Security:[max-age=15552000; includeSubDomains; preload] Access-Control-Allow-Origin:[]] Body:0xc4202849c0 ContentLength:177 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc4200d22d0 TLS:0xc4200a86e0}

1 comment
  • Also getting a 503. I set up the Space last night and since then have uploaded a total of 5 images. Why is this happening when I am nowhere near the rate limit?

27 Answers

Having the same issue as jborg.

<Message>Please reduce your request rate.</Message>

I'm looking at my logs, there is no way I'm hitting 200 requests per second, maybe 200 per minute. I've effectively been blocked from using a service I'm paying for with no warning.

In the 3 years I was using S3 I never had this issue, even when under extreme load.

This is unacceptable, I don't think Spaces is ready for prime time usage based on this.

Sharding your requests across multiple Spaces? How is that a solution? It's supposed to be infinitely scalable.

  • Got a reply from DO Support:

    Thank you for contacting DigitalOcean.

    I apologize for the inconvenience caused! We're aware of an issue with the Spaces service that is related to the behaviour you've reported. Our Engineering team has fixed the issue and now it should be working fine. Thank you for your patience, and understanding in this matter.

    Please let us know if you have any additional questions, and have a wonderful day!

    NOTE: There was no issue noted on the DO status page during the time of this outage I was experiencing.

  • Got the same response on random requests. I never hit the 200 requests/second limit, but some random requests got a 503 Slow Down response. Support has not answered yet. I have simply switched to AWS S3; there is no request limit nor random "Slow Down" responses there. I hope DO will fix this ASAP.

Also, I was getting the 503 Slow Down message when using s3cmd to modify metadata and permissions on files. How can I tell s3cmd to slow down?
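As far as I know, s3cmd itself doesn't expose a request-rate option, so the usual workaround is to wrap the failing operation in your own retry loop with exponential backoff. A minimal sketch in Python; `SlowDown` is a hypothetical stand-in for however your client surfaces the 503:

```python
import random
import time

class SlowDown(Exception):
    """Hypothetical stand-in for a 503 "Slow Down" response."""

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Run `operation`, retrying on SlowDown with exponential backoff.

    Sleeps base_delay * 2**attempt (capped at 30s) plus random jitter
    between attempts, so concurrent clients don't retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except SlowDown:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = min(base_delay * (2 ** attempt), 30.0)
            time.sleep(delay + random.uniform(0, delay / 2))
```

The same pattern works around any transient 503, whether the operation shells out to s3cmd or calls an SDK directly.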

I'm seeing "503 Slow Down". Some communication on this would be good...

Just started using Spaces and I get this error message even at about 1-2 requests per second, or sometimes less: "Please reduce your request rate."

Even when slowing down to 1 request per 5 seconds I get this error.

Something has to be fixed; in the meantime, I'm switching back to AWS S3.

bump - any thoughts on this?

I would expect the API to use HTTP 429, not 403, if it were rate limiting.
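Whatever code the service actually returns, a pragmatic client can treat every code reported in this thread as retryable. A hypothetical sketch (403 is normally an authorization error, and is included here only because Spaces reportedly returned it for rate limiting at the time):

```python
# Codes observed in this thread for rate limiting: 403 and 503.
# 429 is the conventional "Too Many Requests" code, so include it too.
RETRYABLE_STATUS = {403, 429, 503}

def should_retry(status_code: int) -> bool:
    """Return True for responses worth retrying with backoff."""
    return status_code in RETRYABLE_STATUS
```

In a stricter client you'd retry 403 only when the body contains the "Please reduce your request rate" message, to avoid masking genuine permission errors.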

I get 403s in the web upload as well, which is very annoying.

@simeonpashley @jborg at the moment we are rate limiting individual Spaces (the 503 error) that are receiving more than 200 reqs/s. If you need higher throughput we would ask that you create multiple Spaces and split your objects among them.

Also, since Spaces is not a CDN, we recommend using Spaces as a CDN origin with a 3rd party CDN if you're trying to serve assets with low latency and high throughput to your end users. (here are a couple of links with configuration examples: https://www.digitalocean.com/community/questions/does-do-spaces-provide-a-cdn-in-front-of-the-storage-for-fast-global-access)

@wavejd Can you file a ticket about the 403s and share whatever debug/log/screengrab info you can along with it? This isn't a rate limiting error AFAIK but we would like to investigate for sure.

@wawawa We are working on a post on exactly that topic (with a focus on architecting for performance and reliability) and will publish it soon (this month, ideally).
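If you do split objects across multiple Spaces as suggested above, the mapping from key to Space has to be deterministic so that reads find what writes stored. A minimal sketch, with hypothetical Space names:

```python
import hashlib

# Hypothetical Space names. The list must stay stable (never reordered
# or shrunk), or existing keys will map to the wrong Space.
SPACES = ["assets-0", "assets-1", "assets-2", "assets-3"]

def space_for_key(key: str) -> str:
    """Pick a Space deterministically by hashing the object key.

    Spreads requests (and the per-Space 200 req/s limit) roughly
    evenly across all Spaces.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SPACES[int(digest, 16) % len(SPACES)]
```

Adding a Space later would remap most keys; a consistent-hashing scheme avoids that, at the cost of more code.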

I have run into this bug as well. I know for a fact I am not anywhere near 200 req/sec because I have my images mirrored on another server and when I switch to that server and look at apache status it is only doing around 15-20 req/sec.

I was under the impression that the spaces were suitable to use for a production website. Is that not the case? Is there any way to pay more to increase the 200 req/sec limit? I think that would be plenty on average, but we do have traffic spikes at times.
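There's no documented way to pay for a higher limit, but you can at least keep your own clients under the 200 req/s ceiling and smooth out traffic spikes with a client-side throttle. A sketch of a simple token bucket (the rate and capacity values are illustrative):

```python
import time

class TokenBucket:
    """Client-side throttle: allow up to `rate` requests per second,
    with short bursts up to `capacity`, to stay under a server limit."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens for the time elapsed, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for the next token to accrue.
            time.sleep((1 - self.tokens) / self.rate)
```

Calling `bucket.acquire()` before each request (e.g. with `TokenBucket(rate=150, capacity=20)` to leave headroom under 200 req/s) smooths spikes instead of letting them hit the server-side limit.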

I also have this issue. I get 503 errors when I reach only 30 to 40 requests per second.

The solution of putting my objects in multiple Spaces is too complicated to handle. This is the exact opposite of your value proposition, "Simplicity at scale" (homepage).

We are getting rate limited as well. We use Spaces to store static assets with a CDN in front, but we are getting repeated reports from our customers that images fail to load properly on our e-commerce website. This issue has been going on for quite some time now, and the CDN doesn't fix it. It's disappointing because we would like to stay in the DO ecosystem for all the IaaS services we need to run our business, but this problem is forcing us out.

I'm getting this error during development. I've run the upload script once. It's definitely a bug.

I am getting rate limited as well while listing bucket contents. I am issuing well below 1 QPS. My bucket is in NYC.

My code is in Go (using github.com/aws/aws-sdk-go) and the error data that I get back is:
ServiceUnavailable: Please reduce your request rate.
status code: 503, request id: , host id:

This is a regression since yesterday, when equivalent code worked. (And I agree that 503 is not an expected response for rate limiting).

Also getting rate limited with the --recursive flag on 40 items. Very likely under 200 req/s.

We also have this issue. It is causing us a lot of problems on a site that is due to launch imminently. The response time of spaces (varies widely and generally very slow), combined with the rate limiting is forcing us to look elsewhere. We would much rather stick with DO. Any solution in sight?

Same problem when doing a sync. Very annoying. DO is wasting a lot of our time!

Having the same issue here while using s3cmd to sync a folder; this is extremely annoying.

Same issue using sync: getting 503s just fetching the remote object list, and it definitely isn't at 200+/sec.

Getting the same error (seemingly at random) from developing an app that shows a few images.

Hi there as well :)

I am randomly getting 503 errors as well when using s3cmd put for about 1484 files.

Took about 5 minutes to upload them all... so I strongly doubt I hit anything near 10 req/sec :D

In the end, I uploaded 1460 files out of 1484.

24 files got a 503.

Not really sure I can live with that. Is there a way to fix this?

same problem here. any solution yet?

  • Unfortunately, there is no solution to be expected very soon. I've tried to reason with DigitalOcean support. They don't see the problem with this behaviour. They say that your app should handle this and that you should put a CDN in front of your Spaces.

    Honestly, I don't get what Spaces is actually usable for. We have S3 buckets with terabytes of storage and a lot of traffic; DO could earn a lot of money. Sadly, we have now moved to another S3 alternative.

    • Well, I just created an S3 bucket on AWS; it works like a charm.

    • @louisbrauer I can't afford Amazon S3 bandwidth, my requirements are ~800GB egress bandwidth per month and ~500GB storage, if your alternative is cheaper, can you please recommend to drwyrm at gmail? Thanks!

Same error today :-( I see DO is fixing this at the moment. I hope I don't get these errors in production mode :-)

These errors aren't always related to rate limiting. IMHO, Spaces simply has capacity problems and can't handle any production usage ATM. Just wait till you start doing some real work with it and you start getting timeouts after 300 seconds of delay instead of a fast 503 Slow Down response...

My recent usage for NYC3 Spaces averages 13 requests per MINUTE (average over the last 11 days, counting GET, PUT and DELETE), and I have already had to open at least half a dozen tickets due to GET requests failing for an object for hours... That's right, hours... I had to wait hours to be able to access some objects I stored there.

AMS3 Spaces works a little better than NYC3, but it has the same problems; it's just that in AMS3, when an object gets "stuck", it "unstucks" after 10-60 minutes at most, instead of the 2+ hours that often happen in NYC3.

At this exact moment I have an object in NYC3 that hasn't been readable for 4+ hours. It's so frustrating that I had to come to the community forums to vent.

We had to migrate to S3 (no problems for us now). We tried deploying Fastly CDN with shielding enabled but Spaces was still acting as a bottleneck. As mentioned, Spaces is not ready for a live environment. We will consider moving back once all the issues are resolved as it makes sense to have our stack in one place (saving with internal bandwidth from web server to object storage).

I'm using rclone to migrate files from aws and it's been fine apart from a few minutes where it seemed to get stuck and spaces appeared to be unavailable. However if I try updating file permissions recursively using s3cmd I get a rate limit warning every few files changed.
