Has anyone used aws-sdk to operate on DO Spaces? How to do it?

Posted September 18, 2017 8.1k views
Node.js · API · Ubuntu 16.04

I’m trying to wrap my head around this, but no luck thus far. The DO documentation states that the Spaces API is designed to be used like S3 in most cases, so I’ve been trying to use aws-sdk for Node.js to create buckets in Spaces and send files there.
Here is what I’m trying:

var AWS = require("aws-sdk");
var EP = new AWS.Endpoint("");

fs.readFile(file.path, function (err_file, data) {
    if (err_file) throw err_file; // Something went wrong!
    var s3bucket = new AWS.S3({endpoint: EP, params: {Bucket: result._id}}); // MongoDB user id
    var params = {Key:, Body: file};
    s3bucket.upload(params, function (err_s3, data_s3) {
        fs.unlink(file.path, function (err) {
            if (err) console.log('Temp File Delete');
        });
        if (err_s3) throw err_s3;

        return res.json({result: data_s3, err: null});
    });
});

With this I’m getting the error below:

            throw err;

Error: Unsupported body payload object
at ManagedUpload.self.fillQueue (/Users/jh/Documents/dash/node_modules/aws-sdk/lib/s3/managed_upload.js:90:21)
    at ManagedUpload.send (/Users/jh/Documents/dash/node_modules/aws-sdk/lib/s3/managed_upload.js:199:33)
    at features.constructor.upload (/Users/jh/Documents/dash/node_modules/aws-sdk/lib/services/s3.js:1067:50)
    at Response.<anonymous> (/Users/jh/Documents/dash/server/api/client/client.controller.js:217:22)
    at Request.<anonymous> (/Users/jh/Documents/dash/node_modules/aws-sdk/lib/request.js:364:18)
    at Request.callListeners (/Users/jh/Documents/dash/node_modules/aws-sdk/lib/sequential_executor.js:105:20)

Any help will be appreciated.


4 answers

I don’t know if I’m late or not, but I played with this and managed to come up with a solution.
Yes, you can use aws-sdk, and it works. Using your code, I ended up with a new bucket and a file in it.

I used the following code:

var AWS = require("aws-sdk");
var EP = new AWS.Endpoint("");

var s3bucket = new AWS.S3({endpoint: EP, params: {Bucket: 'testabc'}});
var params = {Key: 'Test1', Body: 'Hello!'};
s3bucket.upload(params, function (err_s3, data_s3) {
    if (err_s3) throw err_s3;
});

This created a bucket testabc with the file Test1 in it.
As you can see from your code, you made a mistake in the EP variable.
The endpoint format for an existing bucket is {BUCKET}.{REGION}. As you are creating a new bucket, you should drop the bucket part from the endpoint, leaving just {REGION}.
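To make the two endpoint forms concrete, here is a small sketch. The region nyc3 and bucket testabc are example values, not taken from the posts above:

```javascript
// Build the two endpoint forms described above.
// "nyc3" and "testabc" are example values.
function regionEndpoint(region) {
  // Use this form when creating a bucket: {REGION} only
  return region + ".digitaloceanspaces.com";
}

function bucketEndpoint(bucket, region) {
  // Virtual-hosted form for an existing bucket: {BUCKET}.{REGION}
  return bucket + "." + regionEndpoint(region);
}

console.log(regionEndpoint("nyc3"));            // nyc3.digitaloceanspaces.com
console.log(bucketEndpoint("testabc", "nyc3")); // testabc.nyc3.digitaloceanspaces.com
```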

  • Thank you @xMudrii.

    I wanted to add a folder to my existing bucket ‘kdsuserdata’ for each user, and add their data (mostly photos) to that folder. Now I understand what I was doing wrong. Thank you for pointing me in the right direction. I’m going to post a new version of my code, which will create a new folder in an existing bucket and add files to it, using aws-sdk.
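    For reference, S3-style object stores have no real folders: a “folder” is just a shared key prefix. A per-user folder can be sketched like this (userId and filename are hypothetical placeholder values):

```javascript
// Sketch: a per-user "folder" in Spaces/S3 is just a key prefix.
// userId and filename here are hypothetical example values.
function userKey(userId, filename) {
  return userId + "/" + filename;
}

// Uploading with Key: userKey("5a1b2c", "photo.jpg") stores the object
// under what the Spaces browser displays as a folder named "5a1b2c".
console.log(userKey("5a1b2c", "photo.jpg")); // 5a1b2c/photo.jpg
```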

    Have a nice day!

Response for preflight has invalid HTTP status code 403

I got the 403 problem. Can someone help?

  • I’d check the ACL (permissions) for that resource: not the bucket, but the file in it. HTTP 403 is “Forbidden”. A preflight is the HTTP OPTIONS request a browser sends to check CORS. If your resource (the file in the bucket) has private access permissions, you cannot access it publicly; you need to be authenticated and authorized.

    P.S.: I couldn’t test this URL right now; it returns HTTP 404.
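    To illustrate that fix: passing an explicit ACL when uploading makes the object publicly readable. The bucket and key names below are only examples, not taken from the posts above:

```javascript
// Example upload params: ACL "public-read" makes the object world-readable.
// Without it, Spaces/S3 objects default to private, so anonymous GETs get 403.
var params = {
  Bucket: "testabc",    // example bucket name
  Key: "Test1",         // example object key
  Body: "Hello!",
  ACL: "public-read"
};
// s3bucket.upload(params, callback) would then create a public object.
console.log(params.ACL); // public-read
```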

    • I changed the bucket to actiontwo, but I still get “Response for preflight has invalid HTTP status code 403”.

      I added the secret key and access key, but it responds with a 403. It’s caused by CORS. Could you help me with how to configure that?
      When I use S3 on AWS, I know where to configure it, but on DigitalOcean I don’t.
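      For reference, one way to set CORS rules without a UI is the aws-sdk putBucketCors call. The rule below is a deliberately permissive example; the bucket name and origins are placeholders you should tighten for real use:

```javascript
// Example CORS configuration in the shape s3.putBucketCors expects.
// Bucket name and origins are placeholders; restrict them for production.
var corsParams = {
  Bucket: "actiontwo",
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ["*"],                  // better: your site's exact origin
      AllowedMethods: ["GET", "PUT", "POST"],
      AllowedHeaders: ["*"],
      MaxAgeSeconds: 3600
    }]
  }
};
// s3.putBucketCors(corsParams, callback) would apply these rules,
// so the browser's preflight OPTIONS request can succeed.
console.log(corsParams.CORSConfiguration.CORSRules.length); // 1
```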

    • This is the key I prepared for the upload, but it got a 403 when I tried to upload. I used aws-sdk.

      My config:

      config = {
          endpoint        : ''
          accessKeyId     : creds.access_key
          secretAccessKey : creds.secret_key
      }
      AWS.config.update config
      AWS.config.region = 'NYC3'
      bucket = new (AWS.S3)({
          Bucket: creds.bucket
      })

Hello! I’m also trying to configure Spaces instead of AWS, and as far as I can see, authorization does not work: I get a 403 error. How do I sign the headers in this case?
Do I understand correctly that I need to rewrite everything to use aws4 authorization?
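One thing worth trying before hand-rolling aws4 signing: aws-sdk can be asked to use Signature Version 4 through its client options. A minimal sketch, assuming placeholder credentials and the nyc3 region:

```javascript
// Options for new AWS.S3(...): signatureVersion "v4" asks the SDK to
// sign requests (including getSignedUrl) with AWS Signature V4.
// Endpoint and credentials below are placeholders.
const s3Options = {
  endpoint: "nyc3.digitaloceanspaces.com", // example region endpoint
  signatureVersion: "v4",
  accessKeyId: "SPACES_KEY",               // placeholder
  secretAccessKey: "SPACES_SECRET"         // placeholder
};
// const s3bucket = new AWS.S3(s3Options);
console.log(s3Options.signatureVersion); // v4
```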

AWS.config.accessKeyId =;
AWS.config.secretAccessKey =;

const bucketName =;
const EP = new AWS.Endpoint("");
const s3bucket = new AWS.S3({ endpoint: EP });

const s3Params = {
    Bucket: bucketName,
    Key: fileName,
    Expires: 60,
    ContentType: file.type,
    ACL: 'public-read',
};

s3bucket.getSignedUrl('putObject', s3Params, (err, data) => {
    console.log(err, data);
    if (err) {
        debug('image loading error', err);
    } else {
        const returnData = {
            requestUrl: data,
            imageUrl: `${settings.cdn.full}${fileName}`,
        };
    }
});