Rate limiting on Spaces?

September 27, 2017 2.2k views
Object Storage

Hi - I was trying to run some basic performance tests on Spaces recently and after a certain level, the requests were getting rejected with 403s (sample below). Could someone indicate what rate limits are enforced on Spaces (open HTTP connections, max API calls, etc)? Thanks

Upload status 403 Forbidden: resp: &{Status:403 Forbidden StatusCode:403 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Amz-Request-Id:[tx00000000000001037eae3-0059ae1784-8e28-nyc3a] Content-Length:[177] Accept-Ranges:[bytes] Content-Type:[application/xml] Date:[Tue, 05 Sep 2017 03:18:28 GMT] Strict-Transport-Security:[max-age=15552000; includeSubDomains; preload] Access-Control-Allow-Origin:[]] Body:0xc4202849c0 ContentLength:177 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc4200d22d0 TLS:0xc4200a86e0}

16 Answers

I'm seeing "503 Slow Down". Some communication on this would be good...

Having the same issue as jborg.

<Message>Please reduce your request rate.</Message>

I'm looking at my logs, there is no way I'm hitting 200 requests per second, maybe 200 per minute. I've effectively been blocked from using a service I'm paying for with no warning.

In the 3 years I was using S3 I never had this issue, even when under extreme load.

This is unacceptable, I don't think Spaces is ready for prime time usage based on this.

Sharding your requests across multiple Spaces? How is that a solution? It's supposed to be infinite and scalable.

  • Got a reply from DO Support:

    Thank you for contacting DigitalOcean.

    I apologize for the inconvenience caused! We're aware of an issue with the Spaces service that is related to the behaviour you've reported. Our Engineering team has fixed the issue and now it should be working fine. Thank you for your patience, and understanding in this matter.

    Please let us know if you have any additional questions, and have a wonderful day!

    NOTE: There was no issue noted on the DO status page during the time of this outage I was experiencing.

  • Got the same response on random requests. I never hit the 200 requests/second limit, but some random requests got a 503 Slow Down response. Support has not answered yet, so I have simply switched to AWS S3, which has no such request limit and no random "Slow Down" responses. I hope DO fixes this ASAP.

Also, I was getting the 503 Slow Down message when using s3cmd to modify metadata and permissions on files. How can I tell s3cmd to slow down?

bump - any thoughts on this?

I would expect the API to use HTTP 429 (Too Many Requests), not 403, if it were rate limiting.

I get 403's in the web upload as well, very annoying.

@simeonpashley @jborg at the moment we are rate limiting individual Spaces (the 503 error) that are receiving more than 200 reqs/s. If you need higher throughput we would ask that you create multiple Spaces and split your objects among them.
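For anyone going the multi-Space route suggested above, the key is mapping each object key to a shard deterministically so uploads and downloads agree. A sketch in Go; the Space names are placeholders, not real buckets:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// spaces is a hypothetical set of Spaces created up front to spread load.
var spaces = []string{"assets-00", "assets-01", "assets-02", "assets-03"}

// spaceFor hashes the object key and maps it to one of the Spaces, so the
// same key always lands in the same shard.
func spaceFor(key string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return spaces[int(h.Sum32())%len(spaces)]
}

func main() {
	fmt.Println(spaceFor("images/products/1234.jpg"))
}
```

Note the caveat several posters raise below: this pushes sharding complexity into every client that touches the objects, which is why many consider it a poor substitute for a higher limit.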

Also, since Spaces is not a CDN, we recommend using Spaces as a CDN origin with a 3rd-party CDN if you're trying to serve assets with low latency and high throughput to your end users. (Here's a link with configuration examples: https://www.digitalocean.com/community/questions/does-do-spaces-provide-a-cdn-in-front-of-the-storage-for-fast-global-access)

@wavejd Can you file a ticket about the 403s and share whatever debug/log/screengrab info you can along with it? This isn't a rate limiting error AFAIK, but we would like to investigate for sure.

  • @johngannon thanks for the answer.

    Currently, I'm using 3 Droplets with GlusterFS replication (so I'm not risking losing all images at once) for Laravel static images (over 20k images, 1.5GB+ total). The only problems are the Droplets' storage size limits and sharing CPU/RAM with the web server.

    I'm considering moving to Spaces because it has no size limitation.

    But I can tell you it's easy to hit 200 req/s when you have multiple users.

    In my testing with Spaces, a single client also hit the limit when loading 50 pictures within 2 seconds.

    Maybe there's a bottleneck server-side (perhaps spinning drives; a Seagate 10TB IronWolf manages roughly 2k 4k random reads per second), but a 200 req/s limit is quite small and not very efficient.

    Create multiple Spaces? I'm not sure; that's quite complex to handle.

    Maybe a CDN is the solution? Or maybe I'll come back after reading https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

    • For your use case, putting a CDN in front of Spaces (or your droplets) would probably be your best bet.

      • @johngannon wouldn't it be great if someone from DigitalOcean or the community posted a "Best Practices With Spaces" guide?
        (Well, not from me, as I'm a newcomer with no object storage experience, and English is my third language.)

@wawawa We are working on a post on exactly that topic (with a focus on architecting for performance and reliability) and will publish it soon (this month, ideally).

I have run into this bug as well. I know for a fact I am nowhere near 200 req/sec, because I have my images mirrored on another server, and when I switch to that server, Apache status shows only around 15-20 req/sec.

I was under the impression that Spaces was suitable for a production website. Is that not the case? Is there any way to pay more to increase the 200 req/sec limit? I think that would be plenty on average, but we do have traffic spikes at times.

I also have this issue. I get 503 errors when I reach only 30 to 40 requests per second.

The solution of putting my objects in multiple Spaces is too complicated to handle. This is the exact opposite of your value proposition, « Simplicity at scale » (homepage).

We are getting rate limited as well. We use Spaces to store static assets with a CDN in front, but we are getting repeated reports from our customers that images fail to load properly on our e-commerce website. This issue has been going on for quite some time now, and the CDN doesn't fix it. It's disappointing, because we would like to stay in the DO ecosystem for all the IaaS services we need to run our business, but this problem is forcing us out.

I'm getting this error during development. I've run the upload script once. It's definitely a bug.

I am getting rate-limited as well whilst listing bucket contents. I am issuing well below 1QPS. My bucket is in NYC.

My code is in Go (using github.com/aws/aws-sdk-go) and the error data that I get back is:
ServiceUnavailable: Please reduce your request rate.
status code: 503, request id: , host id:

This is a regression since yesterday, when equivalent code worked. (And I agree that 503 is not an expected response for rate limiting).
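Until this is resolved service-side, the usual client-side mitigation for a 503 is to retry with exponential backoff and jitter. A minimal Go sketch under stated assumptions: `errSlowDown` here merely stands in for the SDK's "ServiceUnavailable: Please reduce your request rate" error; with aws-sdk-go you would inspect the returned error's code instead of a sentinel:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errSlowDown is a stand-in for the 503 Slow Down error from the service.
var errSlowDown = errors.New("503 Slow Down")

// withBackoff retries op on throttling errors, doubling the delay each
// attempt and adding jitter so retries don't synchronize.
func withBackoff(op func() error, maxRetries int) error {
	delay := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = op(); err == nil || !errors.Is(err, errSlowDown) {
			return err // success, or a non-throttling error
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := withBackoff(func() error {
		calls++
		if calls < 3 {
			return errSlowDown // simulate two throttled responses
		}
		return nil
	}, 5)
	fmt.Println(err, calls)
}
```

This only smooths over transient throttling; it won't help if your steady-state rate is genuinely above the limit.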

Also getting rate limited with the --recursive flag on 40 items. Very likely under 200 req/s.

We also have this issue. It is causing us a lot of problems on a site that is due to launch imminently. The response time of Spaces (it varies widely and is generally very slow), combined with the rate limiting, is forcing us to look elsewhere. We would much rather stick with DO. Any solution in sight?
