Speed of Spaces

December 4, 2017 915 views


How are the speed and connection quality of Spaces? Is it fast? Faster than block storage?
What is the maximum number of client connections, for example? Is it like an Apache server?



8 Answers

I just tested it, and it is pretty slow when you have multiple connections. A page of pictures (thumbnails) loads much faster from block storage.
With movies (bigger files) the speed is good, though.

Lots of gateway errors and gateway timeouts.
Spaces is not reliable (yet).

I am experiencing 4–7 seconds (4000+ ms) of latency retrieving some objects. I hope DO is working on a better-performing Spaces.

It gets slower either the more people are using it or the more files you put on it. I have 1.3 million files and it is terribly slow now; I'm not very happy with it anymore. In particular, I get a lot of 404 errors trying to load one of my homepages, a Netflix-style gallery for books, which uses a proxy_pass in nginx. That worked perfectly in beta, but now it is unreliable, and I don't think it is my server sending the 404s.

How did you upload 1.3m files?
With s3cmd? I tried 20k but it was way too slow, and I stopped.

  • The files are mainly compressed images, videos, and PDFs for the social networks I have built and manage. We uploaded the 1.3 million files, roughly 83 GB of data in total, over the course of 16 hours or so. I more or less let my droplet migrate its data over, and then rebuilt the networks to use the Spaces S3 API instead of direct filesystem access over the rest of the day. Definitely pulled an all-nighter, haha.

    I also just wanted to update that I don't get 404 errors when I simply 301 redirect with nginx, as opposed to a proxy_pass configuration. It seems to support more parallel access this way.

    I will add that the smaller the files you put on S3, the slower it will be, because of the way network speed ramps up naturally during a transfer (this is my experience and the best way I know to describe it; I'm sure a networking major could better explain the behavior, which sounds like TCP slow start). Say I had a 100 Mbps connection and tried to shove a million files up: each upload stream would start at zero Mbps, gain speed as it transmits, and then decelerate as the file finishes. So you never really reach 100 Mbps of throughput unless you mostly use large files, or you have an upload mechanism that supports simultaneous uploads.

    I think I just used the PHP S3 SDK from Amazon and did a directory sync. Worked great: 10,000+ directories, 1.3M files, most averaging under 4 MB.

The PHP AWS SDK directory sync did it in less than 16 hours from a DigitalOcean droplet in SFO2.
Using a 301 redirect in nginx solved the 404 errors I was having.
My files are all between 2 KB and 6 MB, totaling 83 GB currently.
They are mostly compressed images, videos & PDFs.
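For reference, the two nginx approaches discussed in this thread look roughly like this (a sketch; the Space hostname and location path are placeholder assumptions, not the poster's actual config):

```nginx
# Before: proxying each request through the droplet
# (the configuration that produced intermittent 404s).
location /media/ {
    proxy_pass https://yourbucketname.nyc3.digitaloceanspaces.com/media/;
}

# After: sending the client straight to Spaces with a 301,
# so each browser opens its own parallel connections to Spaces.
location /media/ {
    return 301 https://yourbucketname.nyc3.digitaloceanspaces.com$request_uri;
}
```

The trade-off is that a 301 exposes the Spaces URL to clients, but it takes the droplet out of the data path entirely.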

  • Thanks for pointing me to the PHP AWS SDK. It may solve the upload speed issue, thanks to the concurrency option.

  • The PHP AWS SDK works only with Amazon, doesn't it? Did you change the code? Can you give an example of how you uploaded the files using the SDK?

    • <?php
      // Script to upload a directory to an S3-compatible storage service,
      // put together by Blaine Miller (https://blaineam.com). I hold no
      // liability and offer no warranty for the use of this script.
      define('AWS_KEY', 'yours3key');
      define('AWS_SECRET_KEY', 'yours3secret');
      $HOST = 'https://nyc3.digitaloceanspaces.com';
      $bucket = 'yourbucketname';

      // Require the Amazon SDK for PHP, version 2.8.8(+)
      require_once 'resources/libraries/aws/aws-autoloader.php';

      use Aws\S3\S3Client;

      // Instantiate the S3 client with your Spaces credentials.
      // In SDK v2 the custom endpoint is set with 'base_url'.
      $s3 = S3Client::factory(array(
          'key'      => AWS_KEY,
          'secret'   => AWS_SECRET_KEY,
          'base_url' => $HOST,
      ));

      // Actually upload (sync) the directory.
      $s3->uploadDirectory('path/to/directory/you/want/to/sync', $bucket, 'path/for/folder/in/s3/bucket', array(
          // uncomment the line below to set ACL attributes for the synced files
          //'params'      => array('ACL' => 'public-read'),
          'concurrency' => 20,   // how many concurrent connections to S3
          'debug'       => true, // enable verbose output
      ));
    • I didn’t modify anything; I just used an older S3 SDK. Worked perfectly.

I am proxying it until the service is improved. I also feel optimistic about the service once the trial period is over; it's pretty young right now. The number of objects should not matter unless frequent listing is required, and even then the listing XML returns at most 1000 keys per request.
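That 1000-key limit is per request: ListObjects returns at most 1000 keys plus a marker for the next page, so clients have to paginate. With the SDK v2 used in the upload script above, the iterator API hides that paging (a sketch; the credentials, endpoint, and bucket name are placeholder assumptions):

```php
<?php
// Sketch of a paginated listing against Spaces with the AWS SDK for PHP v2.
// The key, secret, endpoint, and bucket below are placeholders, not real values.
require_once 'resources/libraries/aws/aws-autoloader.php';

use Aws\S3\S3Client;

$s3 = S3Client::factory(array(
    'key'      => 'yours3key',
    'secret'   => 'yours3secret',
    'base_url' => 'https://nyc3.digitaloceanspaces.com',
));

// getIterator() issues ListObjects requests under the hood, following
// the marker past the 1000-keys-per-response limit automatically.
foreach ($s3->getIterator('ListObjects', array('Bucket' => 'yourbucketname')) as $object) {
    echo $object['Key'], "\n";
}
```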

Ah. I was just thinking of moving a couple of TBs from S3 to DO Spaces, but it seems it's not ready to handle a good amount of traffic. I hope they sort out the speed issues; there's not much use for it otherwise.
