DigitalOcean Spaces was designed to be interoperable with the AWS S3 API in order to allow users to continue using the tools they are already working with. In most cases, using Spaces with an existing S3 library requires configuring the endpoint value to be ${REGION}.digitaloceanspaces.com. However, how to change that setting is often not well documented, as examples tend to use the default AWS values. Third-party libraries tend to be better about this, as they also support alternative, self-hosted object storage implementations like Minio or Ceph.
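
For example, with boto3 (shown in full in the Python answer below), the only Spaces-specific settings are usually the custom endpoint and region:

import boto3

# Only the endpoint and region differ from a stock AWS S3 client;
# everything after this point is standard S3 API usage.
client = boto3.client('s3',
                      region_name='nyc3',
                      endpoint_url='https://nyc3.digitaloceanspaces.com',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')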

In the answers, let’s share some basic examples of working with Spaces using the AWS SDKs in various languages.

These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues, leave a comment, or add your own answer to help others.

15 answers

PHP - (AWS docs)

<?php

// Included aws/aws-sdk-php via Composer's autoloader
// Installed with: composer.phar require aws/aws-sdk-php
require 'vendor/autoload.php';
use Aws\S3\S3Client;

// Configure a client using Spaces
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'nyc3',
    'endpoint' => 'https://nyc3.digitaloceanspaces.com',
    'credentials' => [
        'key'    => 'ACCESS_KEY',
        'secret' => 'SECRET_KEY',
    ],
]);

// Create a new Space
$client->createBucket([
    'Bucket' => 'my-new-space-with-a-unique-name',
]);

// List all Spaces in the region
$spaces = $client->listBuckets();
foreach ($spaces['Buckets'] as $space){
    echo $space['Name']."\n";
}


// Upload a file to the Space
$insert = $client->putObject([
    'Bucket' => 'my-new-space-with-a-unique-name',
    'Key'    => 'file.ext',
    'Body'   => 'The contents of the file',
]);
  • Could you please include one for Java also? Thanks.

  • Where can we find more examples with PHP?

  • Thanks for the code. Would love a PHP example of a multipart upload! (See the boto3 multipart sketch at the end of this comment thread.)

  • OK, using the above example I am able to connect to the API and set the CORS rules. However, when I go to create a Space, this is the error I get. Can anyone help? This is the code I am using in PHP:

    <?php
    
    require_once("vendor/autoload.php");
    use Aws\S3\S3Client;
    
    // Configure a client using Spaces
    $client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region'  => $rows['region'],
        'endpoint' => 'https://'.$rows['region'].'.digitaloceanspaces.com',
        'credentials' => [
                'key'    => $rows['accesskey'],
                'secret' => $rows['secretkey'],
            ],
    ]);
    
    // Create a new Space
    $client->createBucket(["Bucket" => $rows['bucket_name']]);
    
    $result = $client->putBucketCors([
        "Bucket" => $rows['bucket_name'], // REQUIRED
        "CORSConfiguration" => [ // REQUIRED
            "CORSRules" => [ // REQUIRED
                [
                    "AllowedHeaders" => ["*"],
                    "AllowedMethods" => ["HEAD", "POST", "GET"], // REQUIRED
                    "AllowedOrigins" => ["*"], // REQUIRED
                    "MaxAgeSeconds" => 3000,
                ],
                // ...
            ],
        ],
    ]);
    
    PHP Fatal error:  Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "CreateBucket" on "https://foundydemo6.sfo2.digitaloceanspaces.com/"; AWS HTTP error: Client error: `PUT https://foundydemo6.sfo2.digitaloceanspaces.com/` resulted in a `400 Bad Request` response:
    <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName>< (truncated...)
     XAmzContentSHA256Mismatch (client):  - <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName><RequestId>tx000000000000012af771e-005ef51700-90781b-sfo2a</RequestId><HostId>90781b-sfo2a-sfo</HostId></Error>'
    
    GuzzleHttp\Exception\ClientException: Client error: `PUT https://foundydemo6.sfo2.digitaloceanspaces.com/` resulted in a `400 Bad Request` response:
    <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName>< (truncated...)
     in /vendor/guzzlehttp/guzzle in /vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195
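
On the multipart-upload request above: the PHP SDK has a MultipartUploader class for this; as a language-neutral sketch, here is how boto3 handles multipart uploads automatically once a file crosses a configurable size threshold (the threshold, file path, and Space name here are illustrative):

import boto3
from boto3.s3.transfer import TransferConfig

client = boto3.client('s3',
                      region_name='nyc3',
                      endpoint_url='https://nyc3.digitaloceanspaces.com',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# Files larger than the threshold are split into 16 MB parts and
# uploaded as a multipart upload; smaller files use a single PUT.
config = TransferConfig(multipart_threshold=16 * 1024 * 1024,
                        multipart_chunksize=16 * 1024 * 1024)

client.upload_file('/path/to/large-file.bin',
                   'my-new-space-with-a-unique-name',
                   'large-file.bin',
                   Config=config)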
    

JavaScript - (AWS docs)

const AWS = require('aws-sdk')

// Configure client for use with Spaces
const spacesEndpoint = new AWS.Endpoint('nyc3.digitaloceanspaces.com');
const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    accessKeyId: 'ACCESS_KEY',
    secretAccessKey: 'SECRET_KEY'
});


// Create a new Space
var params = {
    Bucket: "my-new-space-with-a-unique-name"
};

s3.createBucket(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});

// List all Spaces in the region
s3.listBuckets({}, function(err, data) {
    if (err) console.log(err, err.stack);
    else {
        data['Buckets'].forEach(function(space) {
            console.log(space['Name']);
        });
    }
});

// Add a file to a Space
var params = {
    Body: "The contents of the file",
    Bucket: "my-new-space-with-a-unique-name",
    Key: "file.ext",
};

s3.putObject(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});
  • Also, you may need to set custom headers, since some of them are required:

          await S3.putObject(params)
            .on('build', request => {
              request.httpRequest.headers.Host = `${BUCKET}.${REGION}.digitaloceanspaces.com`; // Host is a hostname, no scheme
              request.httpRequest.headers['Content-Length'] = file.size;
              request.httpRequest.headers['Content-Type'] = file.mimetype;
              request.httpRequest.headers['x-amz-acl'] = 'public-read';
            })
            .send((err, data) => {
              if (err) console.log(err, err.stack);
              else console.log(data);
            });
    
  • I don’t know if it was just my specific setup, but this did not work for me. I got a series of cryptic messages, which eventually boiled down to the keys not being passed in correctly and the aws-sdk looking for ~/.aws/credentials.

    This worked for me:

      const s3 = new AWS.S3({
        endpoint: spacesEndpoint,
        credentials: new AWS.Credentials({
          accessKeyId: 'ACCESS_KEY',
          secretAccessKey: 'SECRET_KEY'
        })
      })
    
    • Okay, it looks like both methods actually work. However, if you pass the key values directly to the constructor and they are invalid, you will get a horrible error message that tries its hardest to look like some kind of network problem. If you pass a Credentials object to the S3 constructor and the keys are invalid, it will give you a much more sensible “Missing credentials” message.

  • I’m trying to create a bucket and I’m receiving an error response that isn’t very explanatory (InvalidRequest: Malformed request). Any ideas?

    { InvalidRequest: Malformed request
        at Request.extractError (/var/nodejs/storage/node_modules/aws-sdk/lib/services/s3.js:577:35)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:683:14)
        at Request.transition (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:22:10)
        at AcceptorStateMachine.runTo (/var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:14:12)
        at /var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:26:10
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:38:9)
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:685:12)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
      message: 'Malformed request',
      code: 'InvalidRequest',
      region: null,
      time: 2018-05-03T22:16:36.740Z,
      requestId: null,
      extendedRequestId: undefined,
      cfId: undefined,
      statusCode: 400,
      retryable: false,
      retryDelay: 52.89127599113917 }
    
  • Hi there! Does anybody know how to access private files from the API?
    I’m working with a Node.js/Express server.

Working C# example using Amazon AWSSDK (V2.x).

IAmazonS3 amazonS3Client = 
    AWSClientFactory.CreateAmazonS3Client("your-spaces-key", "your-spaces-key-secret",
    new AmazonS3Config
    {
       ServiceURL = "https://nyc3.digitaloceanspaces.com"
    }
);
var myBuckets = amazonS3Client.ListBuckets();

C# ASP.NET Core 2.1 - list objects

(I use this to verify my automated backups to Spaces from an internal company ASP.NET Core web application.)

Add Nuget package:
https://www.nuget.org/packages/AWSSDK.S3/

dotnet add package AWSSDK.S3 

Add references:

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.S3;

Declare constants for authentication:

private const string S3_SECRET_KEY = "your-secret-key-value";
private const string S3_ACCESS_KEY = "your-access-key-value";
private const string S3_HOST_ENDPOINT = "https://nyc3.digitaloceanspaces.com";
private const string S3_BUCKET_NAME = "your-bucket-name-here";

Sample method to fetch all filenames stored in Spaces bucket:

public static List<string> GetFileListFromSpacesBackupStorage()
{
    AmazonS3Config ClientConfig = new AmazonS3Config();
    ClientConfig.ServiceURL = S3_HOST_ENDPOINT;
    IAmazonS3 s3Client = new AmazonS3Client(S3_ACCESS_KEY, S3_SECRET_KEY, ClientConfig);
    var ObjectList = s3Client.ListObjectsAsync(S3_BUCKET_NAME).Result;
    var FileList = ObjectList.S3Objects.Select(c => c.Key).ToList();
    return FileList;
}
  • Hi, thanks for the solution. That made the process significantly easier.

    I am, however, experiencing problems when trying to access private files specifically with a whitespace in the filename. They have been successfully uploaded using Minio.

    Has anyone experienced this and maybe even found a solution to this?

  • Hi! The above code indeed worked to get a list of files (objects) from a bucket. However, when I try to get (download) a file from a bucket’s folder, I receive Error Code SignatureDoesNotMatch and HTTP Status Code Forbidden. The signature is calculated by the AWS SDK, so I can’t really do anything here. DO Spaces is not as compatible with AWS S3 as they claim!!!

    foreach (S3Object entry in list)
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = "My Bucket Name",
            Key = "My Folder/my_filename.txt"
        };
    
        var response = await s3Client.GetObjectAsync(request);
    }
    

Ruby - (AWS docs)

require 'aws-sdk-s3'

# Configure client for use with Spaces
client = Aws::S3::Client.new(
  access_key_id: 'ACCESS_KEY',
  secret_access_key: 'SECRET_KEY',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  region: 'nyc3'
)

# Create a new Space
client.create_bucket({
  bucket: "my-new-space-with-a-unique-name",
  acl: "private"
})

# List all Spaces
spaces = client.list_buckets()
spaces.buckets.each do |space|
  puts space.name
end

# Add a file to a Space
client.put_object({
  body: "The contents of the file",
  bucket: "my-new-space-with-a-unique-name",
  key: "file-name.txt"
})

Go - (AWS Docs)

package main

import (
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Initialize a client using Spaces
    s3Config := &aws.Config{
        Credentials: credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
        Endpoint:    aws.String("https://nyc3.digitaloceanspaces.com"),
        Region:      aws.String("us-east-1"), // This is counterintuitive, but it will fail with a non-AWS region name.
    }

    newSession := session.New(s3Config)
    s3Client := s3.New(newSession)

    // Create a new Space
    params := &s3.CreateBucketInput{
        Bucket: aws.String("my-new-space-with-a-unique-name"),
    }

    _, err := s3Client.CreateBucket(params)
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    // List all Spaces in the region
    spaces, err := s3Client.ListBuckets(nil)
    if err != nil {
        fmt.Println(err.Error())
        return
    }

    for _, b := range spaces.Buckets {
        fmt.Printf("%s\n", aws.StringValue(b.Name))
    }

    // Upload a file to the Space
    object := s3.PutObjectInput{
        Body:   strings.NewReader("The contents of the file"),
        Bucket: aws.String("my-new-space-with-a-unique-name"),
        Key:    aws.String("file.ext"),
    }
    _, err = s3Client.PutObject(&object)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
}

Kotlin (& Java) - (AWS SDK for Java)

You only need to set up withEndpointConfiguration correctly, and then you can use the rest of the API as usual, as in the example here or in Amazon’s official documentation.

val s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(AwsClientBuilder.EndpointConfiguration("https://sgp1.digitaloceanspaces.com", "sgp1"))
    .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("key", "secret")))
    .build()

s3Client.listObjects("bucketName")

Python 3 - (Boto Docs)

import boto3

# Initialize a session using Spaces
session = boto3.session.Session()
client = session.client('s3',
                        region_name='nyc3',
                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY')

# Create a new Space
client.create_bucket(Bucket='my-new-space-with-a-unique-name')

# List all Spaces in the region
response = client.list_buckets()
for s in [space['Name'] for space in response['Buckets']]:
    print(s)

# Add a file to a Space
client.upload_file('/path/to/file.ext',
                   'my-new-space-with-a-unique-name',
                   'file.ext')

Hi,
We are trying to access private files from Spaces using the elFinder file manager, but we are not able to access them while they have private access. We can only access them if we give them public access, which may not be good for the application: with public access, anyone can reach the files by URL through any browser, without any permissions.

We are able to upload files to Spaces, but we are not able to access the same files we just uploaded. We would like to upload and access files with private access through the elFinder file manager.

Currently we are using the S3 adapter. Any suggestions are appreciated.

Thank you very much
Venu Kommu
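
One common way to serve private files without making a Space public is to keep the objects private and have the server hand out short-lived pre-signed URLs. Here is a minimal boto3 sketch (not elFinder-specific; the bucket and key names are hypothetical):

import boto3

session = boto3.session.Session()
client = session.client('s3',
                        region_name='nyc3',
                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY')

# Generate a URL that grants read access to a private object for one hour
url = client.generate_presigned_url('get_object',
                                    Params={'Bucket': 'my-private-space',
                                            'Key': 'file.ext'},
                                    ExpiresIn=3600)
print(url)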

What about Swift? Also, do I need to update Info.plist? If yes, and you have an example, please share.

Try reading this article:

Send Image from Java to DigitalOcean Space (Bucket) using AWS SDK

It’s exactly what you’re looking for.

Swift, iOS

...
var transferUtility: AWSS3TransferUtility!
...
override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        ...
        //Setup credentials
        let credentialsProvider = AWSStaticCredentialsProvider(accessKey:"ACCESS_KEY", secretKey: "SECRET_KEY")

        //Setup region
        let endpoint = AWSEndpoint(urlString: "https://nyc3.digitaloceanspaces.com")

        //Setup the service configuration
        let configuration = AWSServiceConfiguration(region: .USEast1, endpoint: endpoint, credentialsProvider: credentialsProvider)

        //Setup the transfer utility configuration
        let tuConf = AWSS3TransferUtilityConfiguration()
        tuConf.isAccelerateModeEnabled = false
        tuConf.bucket = S3BucketName

        //Register a transfer utility object
        AWSS3TransferUtility.register(
            with: configuration!,
            transferUtilityConfiguration: tuConf,
            forKey: "transfer-utility-with-advanced-options"
        )

        //Look up the transfer utility object from the registry to use for your transfers.
        transferUtility = AWSS3TransferUtility.s3TransferUtility(forKey: "transfer-utility-with-advanced-options")
...
}

VB.NET - Upload files to DigitalOcean Spaces with S3 TransferUtility

Try
    s3Client = New AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, New AmazonS3Config With {
        .SignatureVersion = "2",
        .ServiceURL = "https://sfo2.digitaloceanspaces.com"})

    Dim directoryTransferUtility = New TransferUtility(s3Client)
    Dim uploadRequest = New TransferUtilityUploadDirectoryRequest With {
        .BucketName = spaceName,
        .KeyPrefix = "folder/",
        .Directory = "Your Local Directory"
    }

    AddHandler uploadRequest.UploadDirectoryProgressEvent, New EventHandler(Of UploadDirectoryProgressArgs)(AddressOf UploadFile_ProgressBar)
    Await directoryTransferUtility.UploadDirectoryAsync(uploadRequest)

Catch ex As AmazonS3Exception
    ' MsgBox does not substitute {0} placeholders, so format the message explicitly
    MsgBox(String.Format("Error encountered on server. Message:'{0}' when writing an object", ex.Message))
Catch ex As Exception
    MsgBox(String.Format("Unknown error encountered on server. Message:'{0}' when writing an object", ex.Message))
End Try


How do you do this in Swift? The above solution works, but how do you upload?

Scala & Spark. Configure Spark with the hadoop-aws library (which includes the official AWS SDK); usage samples are below. There are also other ways of passing credentials, described in the official Hadoop documentation.

// Configuring Spark Session
val hadoopConf = sparkSession.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", "<YOUR_ACCESS_KEY>")
hadoopConf.set("fs.s3a.secret.key", "<YOUR_SECRET_KEY>")
hadoopConf.set("fs.s3a.endpoint", "https://<REGION>.digitaloceanspaces.com") // example: "https://nyc3.digitaloceanspaces.com

// Loading CSV from Spaces into a DataFrame:
sparkSession.read
      .format("csv")
      .option("header", true)
      .load("s3a://<YOUR_BUCKET>/<YOUR_INPUT_DIR>")
      .show()

// Writing a DataFrame to CSV:
sampleDataFrame.write
        .format("csv")
        .save("s3a://<YOUR_BUCKET>/<YOUR_OUTPUT_DIR>")