How to use DigitalOcean Spaces with the AWS S3 SDKs?


DigitalOcean Spaces was designed to be interoperable with the AWS S3 API in order to allow users to keep using the tools they already work with. In most cases, using Spaces with an existing S3 library requires only configuring the endpoint value to be ${REGION}.digitaloceanspaces.com, though how to change that setting is often not well documented, as examples tend to use the default AWS values. Third-party libraries tend to be better about this, as they already support alternative, self-hosted object storage implementations like Minio or Ceph.

In the answers, let's share some basic examples of working with Spaces using the AWS SDKs in various languages.
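For example, with boto3 the entire change boils down to that one endpoint setting (a minimal sketch; fuller examples in each language follow in the answers):

import boto3

# The only non-default setting Spaces needs: point the client at the
# regional Spaces endpoint instead of AWS.
client = boto3.session.Session().client('s3',
                                        region_name='nyc3',
                                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                                        aws_access_key_id='ACCESS_KEY',
                                        aws_secret_access_key='SECRET_KEY')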

8 Answers

PHP - (AWS docs)

<?php

// Included aws/aws-sdk-php via Composer's autoloader
// Installed with: composer.phar require aws/aws-sdk-php
require 'vendor/autoload.php';
use Aws\S3\S3Client;

// Configure a client using Spaces
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'nyc3',
    'endpoint' => 'https://nyc3.digitaloceanspaces.com',
    'credentials' => [
        'key'    => 'ACCESS_KEY',
        'secret' => 'SECRET_KEY',
    ],
]);

// Create a new Space
$client->createBucket([
    'Bucket' => 'my-new-space-with-a-unique-name',
]);

// List all Spaces in the region
$spaces = $client->listBuckets();
foreach ($spaces['Buckets'] as $space) {
    echo $space['Name'] . "\n";
}


// Upload a file to the Space
$insert = $client->putObject([
    'Bucket' => 'my-new-space-with-a-unique-name',
    'Key'    => 'file.ext',
    'Body'   => 'The contents of the file',
]);

JavaScript - (AWS docs)

const AWS = require('aws-sdk');

// Configure client for use with Spaces
const spacesEndpoint = new AWS.Endpoint('nyc3.digitaloceanspaces.com');
const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    accessKeyId: 'ACCESS_KEY',
    secretAccessKey: 'SECRET_KEY'
});


// Create a new Space
var params = {
    Bucket: "my-new-space-with-a-unique-name"
};

s3.createBucket(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});

// List all Spaces in the region
s3.listBuckets({}, function(err, data) {
    if (err) console.log(err, err.stack);
    else {
        data['Buckets'].forEach(function(space) {
            console.log(space['Name']);
        });
    }
});

// Add a file to a Space
var params = {
    Body: "The contents of the file",
    Bucket: "my-new-space-with-a-unique-name",
    Key: "file.ext",
};

s3.putObject(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});
  • Also, here is how to set custom headers on the request, since some of them are required:

          S3.putObject(params)
            .on('build', request => {
              // Host must be the bare hostname, without the scheme
              request.httpRequest.headers.Host = `${BUCKET}.${REGION}.digitaloceanspaces.com`;
              request.httpRequest.headers['Content-Length'] = file.size;
              request.httpRequest.headers['Content-Type'] = file.mimetype;
              request.httpRequest.headers['x-amz-acl'] = 'public-read';
            })
            .send((err, data) => {
              if (err) console.log(err, err.stack);
              else console.log(data);
            });
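
    For comparison, in boto3 these map to named parameters on put_object rather than raw HTTP headers, and the SDK builds the headers itself (a hedged sketch, reusing a client configured as in the Python answer below):

      # ACL and content metadata are passed as parameters; boto3 sets
      # the corresponding headers on the request for you.
      client.put_object(
          Bucket='my-new-space-with-a-unique-name',
          Key='file.ext',
          Body=b'The contents of the file',
          ACL='public-read',          # becomes the x-amz-acl header
          ContentType='text/plain')   # becomes the Content-Type header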
    
  • I don't know if it was just my specific setup, but this did not work for me. I got a series of cryptic messages, which eventually boiled down to the keys not being passed in correctly and the aws-sdk looking for ~/.aws/credentials.

    This worked for me:

      const s3 = new AWS.S3({
        endpoint: spacesEndpoint,
        credentials: new AWS.Credentials({
          accessKeyId: 'ACCESS_KEY',
          secretAccessKey: 'SECRET_KEY'
        })
      })
    
    • Okay, it looks like both methods actually work. However, if you pass the key values directly to the constructor and they are invalid, you will get a horrible error message that tries its hardest to look like some kind of network problem. If you pass a Credentials object to the S3 constructor and the keys are invalid, it will give you a much more sensible "Missing credentials" message.

  • I'm trying to create a bucket and I'm receiving an error response that explains nothing (InvalidRequest: Malformed request). Any ideas?

    { InvalidRequest: Malformed request
        at Request.extractError (/var/nodejs/storage/node_modules/aws-sdk/lib/services/s3.js:577:35)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:683:14)
        at Request.transition (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:22:10)
        at AcceptorStateMachine.runTo (/var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:14:12)
        at /var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:26:10
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:38:9)
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:685:12)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
      message: 'Malformed request',
      code: 'InvalidRequest',
      region: null,
      time: 2018-05-03T22:16:36.740Z,
      requestId: null,
      extendedRequestId: undefined,
      cfId: undefined,
      statusCode: 400,
      retryable: false,
      retryDelay: 52.89127599113917 }
    

Go - (AWS Docs)

package main

import (
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Initialize a client using Spaces
    s3Config := &aws.Config{
        Credentials: credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
        Endpoint:    aws.String("https://nyc3.digitaloceanspaces.com"),
        Region:      aws.String("us-east-1"), // This is counter intuitive, but it will fail with a non-AWS region name.
    }

    newSession := session.New(s3Config)
    s3Client := s3.New(newSession)

    // Create a new Space
    params := &s3.CreateBucketInput{
        Bucket: aws.String("my-new-space-with-a-unique-name"),
    }

    _, err := s3Client.CreateBucket(params)
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    // List all Spaces in the region
    spaces, err := s3Client.ListBuckets(nil)
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    for _, b := range spaces.Buckets {
        fmt.Printf("%s\n", aws.StringValue(b.Name))
    }

    // Upload a file to the Space
    object := s3.PutObjectInput{
        Body:   strings.NewReader("The contents of the file"),
        Bucket: aws.String("my-new-space-with-a-unique-name"),
        Key:    aws.String("file.ext"),
    }
    _, err = s3Client.PutObject(&object)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
}

Kotlin (& Java) - (AWS SDK for Java)

You only need to set up withEndpointConfiguration correctly; after that, you can use the rest of the API as usual. See the official AWS documentation for more examples.

val s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(AwsClientBuilder.EndpointConfiguration("https://sgp1.digitaloceanspaces.com", "sgp1"))
    .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("key", "secret")))
    .build()

s3Client.listObjects("bucketName")

A working C# example using the Amazon AWSSDK (v2.x):

IAmazonS3 amazonS3Client = AWSClientFactory.CreateAmazonS3Client(
    "your-spaces-key", "your-spaces-key-secret",
    new AmazonS3Config
    {
        ServiceURL = "https://nyc3.digitaloceanspaces.com"
    }
);
var myBuckets = amazonS3Client.ListBuckets();

Ruby - (AWS docs)

require 'aws-sdk-s3'

# Configure client for use with Spaces
client = Aws::S3::Client.new(
  access_key_id: 'ACCESS_KEY',
  secret_access_key: 'SECRET_KEY',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  region: 'nyc3'
)

# Create a new Space
client.create_bucket({
  bucket: "my-new-space-with-a-unique-name",
  acl: "private"
})

# List all Spaces
spaces = client.list_buckets
spaces.buckets.each do |space|
  puts space.name
end

# Add a file to a Space
client.put_object({
  body: "The contents of the file",
  bucket: "my-new-space-with-a-unique-name",
  key: "file-name.txt"
})

Python 3 - (Boto Docs)

import boto3

# Initialize a session using Spaces
session = boto3.session.Session()
client = session.client('s3',
                        region_name='nyc3',
                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY')

# Create a new Space
client.create_bucket(Bucket='my-new-space-with-a-unique-name')

# List all Spaces in the region
response = client.list_buckets()
for space in response['Buckets']:
    print(space['Name'])

# Add a file to a Space
client.upload_file('/path/to/file.ext',
                   'my-new-space-with-a-unique-name',
                   'file.ext')
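
To read the file back later, download_file is the symmetric call (a minimal sketch, assuming the client configured above and that file.ext exists in the Space):

# Retrieve the uploaded object to a local path
client.download_file('my-new-space-with-a-unique-name',
                     'file.ext',
                     '/path/to/downloaded-file.ext')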

Hi,
We are trying to access private files in Spaces using the Elfinder File Manager, but we cannot access them while they have private access. We can only access them if we give them public access, which may not be good for the application: with public access, anyone can reach the files by URL through any browser without any permissions.

We are able to upload files to the Space, but not to access the same files we just uploaded. We would like to upload and access files with private access through the Elfinder File Manager.

Currently we are using the S3 adapter. Any suggestions are appreciated.

Thank you very much
Venu Kommu
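
One approach worth trying (a sketch, not specific to Elfinder): keep the Space private and generate short-lived presigned URLs for the objects you need to serve. Any S3-compatible SDK can do this; for example, with boto3 and a client configured as in the Python answer above:

# Generate a temporary signed GET URL for a private object.
# Anyone holding the URL can fetch the object until it expires.
url = client.generate_presigned_url('get_object',
                                    Params={'Bucket': 'my-new-space-with-a-unique-name',
                                            'Key': 'file.ext'},
                                    ExpiresIn=3600)  # seconds; the link stops working after an hour
print(url)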
