DigitalOcean Spaces was designed to be interoperable with the AWS S3 API so that users can keep working with the tools they already use. In most cases, using Spaces with an existing S3 library only requires configuring the endpoint value to be ${REGION}.digitaloceanspaces.com, though how to change that setting is often poorly documented, since examples tend to use the default AWS values. Third-party libraries tend to handle this better, as they already support alternative, self-hosted object storage implementations like MinIO or Ceph.

In the answers, let’s share some basic examples of working with Spaces using the AWS SDKs in various languages.

These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues, leave a comment, or add your own answer to help others.

16 answers

PHP - (AWS docs)


// Included aws/aws-sdk-php via Composer's autoloader
// Installed with: composer.phar require aws/aws-sdk-php
require 'vendor/autoload.php';
use Aws\S3\S3Client;

// Configure a client using Spaces
$client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region'  => 'nyc3',
        'endpoint' => 'https://nyc3.digitaloceanspaces.com',
        'credentials' => [
                'key'    => 'ACCESS_KEY',
                'secret' => 'SECRET_KEY',
        ],
]);

// Create a new Space
$client->createBucket([
    'Bucket' => 'my-new-space-with-a-unique-name',
]);

// Listing all Spaces in the region
$spaces = $client->listBuckets();
foreach ($spaces['Buckets'] as $space){
    echo $space['Name']."\n";
}

// Upload a file to the Space
$insert = $client->putObject([
     'Bucket' => 'my-new-space-with-a-unique-name',
     'Key'    => 'file.ext',
     'Body'   => 'The contents of the file',
]);
  • Could you please include one for java also? Thanks.

  • Where can we find more examples with PHP?

  • Thx for the code. Would love a php example of a multipart upload!

  • OK, using the above example I am able to connect to the API and set the CORS rules. However, when I go to create a Space, this is the error I get. Can anyone help? This is the code I am using in PHP:

    use Aws\S3\S3Client;
    // Configure a client using Spaces
    $client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region'  => $rows['region'],
        'endpoint' => 'https://'.$rows['region'].'.digitaloceanspaces.com',
        'credentials' => [
                'key'    => $rows['accesskey'],
                'secret' => $rows['secretkey'],
        ],
    ]);
    // Create a new Space
    $client->createBucket(["Bucket" => $rows['bucket_name']]);
    $result = $client->putBucketCors([
        "Bucket" => $rows['bucket_name'], // REQUIRED
        "CORSConfiguration" => [ // REQUIRED
            "CORSRules" => [ // REQUIRED
                [
                    "AllowedHeaders" => ["*"],
                    "AllowedMethods" => ["HEAD", "POST", "GET"], // REQUIRED
                    "AllowedOrigins" => ["*"], // REQUIRED
                    "MaxAgeSeconds" => 3000,
                ],
            ],
        ],
    ]);
    // ...
    PHP Fatal error:  Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "CreateBucket" on ""; AWS HTTP error: Client error: `PUT` resulted in a `400 Bad Request` response:
    <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName>< (truncated...)
     XAmzContentSHA256Mismatch (client):  - <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName><RequestId>tx000000000000012af771e-005ef51700-90781b-sfo2a</RequestId><HostId>90781b-sfo2a-sfo</HostId></Error>'
    GuzzleHttp\Exception\ClientException: Client error: `PUT` resulted in a `400 Bad Request` response:
    <?xml version="1.0" encoding="UTF-8"?><Error><Code>XAmzContentSHA256Mismatch</Code><BucketName>foundydemo6</BucketName>< (truncated...)
     in /vendor/guzzlehttp/guzzle in /vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195

JavaScript - (AWS docs)

const AWS = require('aws-sdk')

// Configure client for use with Spaces
const spacesEndpoint = new AWS.Endpoint('nyc3.digitaloceanspaces.com');
const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    accessKeyId: 'ACCESS_KEY',
    secretAccessKey: 'SECRET_KEY'
});

// Create a new Space
var params = {
    Bucket: "my-new-space-with-a-unique-name"
};

s3.createBucket(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});

// List all Spaces in the region
s3.listBuckets({}, function(err, data) {
    if (err) console.log(err, err.stack);
    else {
        data['Buckets'].forEach(function(space) {
            console.log(space['Name']);
        });
    }
});

// Add a file to a Space
var params = {
    Body: "The contents of the file",
    Bucket: "my-new-space-with-a-unique-name",
    Key: "file.ext",
};

s3.putObject(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log(data);
});
  • Also, setting custom headers since some of them are required:

          await S3.putObject(params)
            .on('build', request => {
              request.httpRequest.headers.Host = `https://${BUCKET}.${REGION}.digitaloceanspaces.com`;
              request.httpRequest.headers['Content-Length'] = file.size;
              request.httpRequest.headers['Content-Type'] = file.mimetype;
              request.httpRequest.headers['x-amz-acl'] = 'public-read';
            })
            .send((err, data) => {
              if (err) console.log(err, err.stack);
              else console.log(data);
            });
  • I don’t know if it was just my specific setup, but this did not work for me. I got a series of cryptic messages, which eventually boiled down to the keys not being passed in correctly and aws-sdk looking for ~/.aws/credentials

    This worked for me:

      const s3 = new AWS.S3({
        endpoint: spacesEndpoint,
        credentials: new AWS.Credentials({
          accessKeyId: 'ACCESS_KEY',
          secretAccessKey: 'SECRET_KEY'
        })
      });
    • Okay, looks like both methods actually work. However, if you pass the key values directly to the constructor and they are invalid, you will get a horrible error message that tries its hardest to look like some kind of network problem. If you pass a Credentials object to the S3 constructor and the keys are invalid, it will give you a much more sensible “Missing credentials” message.

  • I’m trying to create a bucket and I’m receiving an unhelpful error response (InvalidRequest: Malformed request), any ideas?

    { InvalidRequest: Malformed request
        at Request.extractError (/var/nodejs/storage/node_modules/aws-sdk/lib/services/s3.js:577:35)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
        at Request.emit (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:683:14)
        at Request.transition (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:22:10)
        at AcceptorStateMachine.runTo (/var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:14:12)
        at /var/nodejs/storage/node_modules/aws-sdk/lib/state_machine.js:26:10
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:38:9)
        at Request.<anonymous> (/var/nodejs/storage/node_modules/aws-sdk/lib/request.js:685:12)
        at Request.callListeners (/var/nodejs/storage/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
      message: 'Malformed request',
      code: 'InvalidRequest',
      region: null,
      time: 2018-05-03T22:16:36.740Z,
      requestId: null,
      extendedRequestId: undefined,
      cfId: undefined,
      statusCode: 400,
      retryable: false,
      retryDelay: 52.89127599113917 }
  • Hi there! Does anybody know how to access private files from an API?
    I’m working with a Node.js/Express server.

Working C# example using Amazon AWSSDK (V2.x).

IAmazonS3 amazonS3Client =
    AWSClientFactory.CreateAmazonS3Client("your-spaces-key", "your-spaces-key-secret",
    new AmazonS3Config
    {
        ServiceURL = "https://nyc3.digitaloceanspaces.com"
    });
var myBuckets = amazonS3Client.ListBuckets();

C# (.NET Core 2.1) - list objects

(I use this to verify my automated backup to spaces from an internal company core web application)

Add Nuget package:

dotnet add package AWSSDK.S3 

Add references:

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.S3;

Declare constants for authentication:

private const string S3_SECRET_KEY = "your-secret-key-value";
private const string S3_ACCESS_KEY = "your-access-key-value";
private const string S3_HOST_ENDPOINT = "https://nyc3.digitaloceanspaces.com";
private const string S3_BUCKET_NAME = "your-bucket-name-here";

Sample method to fetch all filenames stored in Spaces bucket:

public static List<string> GetFileListFromSpacesBackupStorage()
{
    AmazonS3Config ClientConfig = new AmazonS3Config();
    ClientConfig.ServiceURL = S3_HOST_ENDPOINT;
    IAmazonS3 s3Client = new AmazonS3Client(S3_ACCESS_KEY, S3_SECRET_KEY, ClientConfig);
    var ObjectList = s3Client.ListObjectsAsync(S3_BUCKET_NAME).Result;
    var FileList = ObjectList.S3Objects.Select(c => c.Key).ToList();
    return FileList;
}
  • Hi, thanks for the solution. That eased the process up significantly.

    I am, however, experiencing problems when trying to access private files specifically with a whitespace in the filename. They have been successfully uploaded using Minio.

    Has anyone experienced this and maybe even found a solution to this?

  • Hi! The above code indeed worked to get a list of files (objects) from a bucket. However, when I try to get (download) a file from a bucket’s folder I receive: Error Code SignatureDoesNotMatch and HTTP Status Code Forbidden. The signature is calculated by the AWS SDK, so I can’t really do anything here. DO Spaces is not as compatible with AWS S3 as they claim!!!

    foreach (S3Object entry in list)
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = "My Bucket Name",
            Key = "My Folder/my_filename.txt"
        };
        var response = await s3Client.GetObjectAsync(request);
    }

Ruby - (AWS docs)

require 'aws-sdk-s3'

# Configure client for use with Spaces
client = Aws::S3::Client.new(
  access_key_id: 'ACCESS_KEY',
  secret_access_key: 'SECRET_KEY',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  region: 'nyc3'
)

# Create a new Space
client.create_bucket(
  bucket: "my-new-space-with-a-unique-name",
  acl: "private"
)

# List all Spaces
spaces = client.list_buckets()
spaces.buckets.each do |space|
  puts "#{space.name}"
end

# Add a file to a Space
client.put_object(
  body: "The contents of the file",
  bucket: "my-new-space-with-a-unique-name",
  key: "file-name.txt"
)

Kotlin (& Java) - [SDK]

You only need to set up withEndpointConfiguration correctly, and then you can use the other APIs normally; see the example here or in the official Amazon documentation.

var s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(AwsClientBuilder.EndpointConfiguration("https://sgp1.digitaloceanspaces.com", "sgp1"))
    .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials("key", "secret")))
    .build()


Go - (AWS Docs)

package main

import (
    "fmt"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Initialize a client using Spaces
    s3Config := &aws.Config{
        Credentials: credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
        Endpoint:    aws.String("https://nyc3.digitaloceanspaces.com"),
        Region:      aws.String("us-east-1"), // This is counterintuitive, but it will fail with a non-AWS region name.
    }

    newSession := session.New(s3Config)
    s3Client := s3.New(newSession)

    // Create a new Space
    params := &s3.CreateBucketInput{
        Bucket: aws.String("my-new-space-with-a-unique-name"),
    }

    _, err := s3Client.CreateBucket(params)
    if err != nil {
        fmt.Println(err.Error())
    }

    // List all Spaces in the region
    spaces, err := s3Client.ListBuckets(nil)
    if err != nil {
        fmt.Println(err.Error())
    }

    for _, b := range spaces.Buckets {
        fmt.Printf("%s\n", aws.StringValue(b.Name))
    }

    // Upload a file to the Space
    object := s3.PutObjectInput{
        Body:   strings.NewReader("The contents of the file"),
        Bucket: aws.String("my-new-space-with-a-unique-name"),
        Key:    aws.String("file.ext"),
    }
    _, err = s3Client.PutObject(&object)
    if err != nil {
        fmt.Println(err.Error())
    }
}

Python 3 - (Boto Docs)

import boto3

# Initialize a session using Spaces
session = boto3.session.Session()
client = session.client('s3',
                        region_name='nyc3',
                        endpoint_url='https://nyc3.digitaloceanspaces.com',
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY')

# Create a new Space
client.create_bucket(Bucket='my-new-space-with-a-unique-name')

# List all Spaces in the region
response = client.list_buckets()
for s in [space['Name'] for space in response['Buckets']]:
    print(s)

# Add a file to a Space
client.put_object(Bucket='my-new-space-with-a-unique-name',
                  Key='file.ext',
                  Body=b'The contents of the file')

We are trying to access private files from Spaces using the elFinder file manager, but we cannot access them while they have private access. We can only access them if we give them public access, which is not good for the application: with public access, anyone can reach the files by URL through any browser, without any permissions.

We are able to upload files to Spaces, but not to access the same files we just uploaded. We would like to upload and access files with private access through the elFinder file manager.

Currently we are using the S3 adapter. Any suggestions are appreciated.

Thank you very much
Venu Kommu

With Swift? Also, do I need to update Info.plist? If yes, and you have an example, please share.

Try to read this article.

Send Image from Java to DigitalOcean Space (Bucket) using AWS SDK

It’s exactly what you’re looking for.

Swift, iOS

var transferUtility: AWSS3TransferUtility!
override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        //Setup credentials
        let credentialsProvider = AWSStaticCredentialsProvider(accessKey:"ACCESS_KEY", secretKey: "SECRET_KEY")

        //Setup region
        let endpoint = AWSEndpoint(urlString: "https://nyc3.digitaloceanspaces.com")

        //Setup the service configuration
        let configuration = AWSServiceConfiguration(region: .USEast1, endpoint: endpoint, credentialsProvider: credentialsProvider)

        //Setup the transfer utility configuration
        let tuConf = AWSS3TransferUtilityConfiguration()
        tuConf.isAccelerateModeEnabled = false
        tuConf.bucket = S3BucketName

        //Register a transfer utility object
        AWSS3TransferUtility.register(
            with: configuration!,
            transferUtilityConfiguration: tuConf,
            forKey: "transfer-utility-with-advanced-options"
        )

        //Look up the transfer utility object from the registry to use for your transfers.
        transferUtility = AWSS3TransferUtility.s3TransferUtility(forKey: "transfer-utility-with-advanced-options")
}

VB.NET - Upload files to DigitalOcean Spaces with S3 TransferUtility


Try
    s3Client = New AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, New AmazonS3Config With {
                    .SignatureVersion = "2",
                    .ServiceURL = "https://nyc3.digitaloceanspaces.com"})

    Dim directoryTransferUtility = New TransferUtility(s3Client)
    Dim uploadRequest = New TransferUtilityUploadDirectoryRequest With {
                    .BucketName = spaceName,
                    .KeyPrefix = "folder/",
                    .Directory = "Your Local Directory"}

    AddHandler uploadRequest.UploadDirectoryProgressEvent, New EventHandler(Of UploadDirectoryProgressArgs)(AddressOf UploadFile_ProgressBar)
    Await directoryTransferUtility.UploadDirectoryAsync(uploadRequest)

Catch ex As AmazonS3Exception
    MsgBox(String.Format("Error encountered on server. Message:'{0}' when writing an object", ex.Message))
Catch ex As Exception
    MsgBox(String.Format("Unknown error encountered on server. Message:'{0}' when writing an object", ex.Message))
End Try


How do you do this in Swift? The above solution works, but how do you upload?

Scala & Spark. Configuring Spark with the hadoop-aws library (which includes the official AWS SDK), with usage samples below. Other ways of passing credentials are described in the official Hadoop documentation.

// Configuring Spark Session
val hadoopConf = sparkSession.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", "<YOUR_ACCESS_KEY>")
hadoopConf.set("fs.s3a.secret.key", "<YOUR_SECRET_KEY>")
hadoopConf.set("fs.s3a.endpoint", "https://<REGION>.digitaloceanspaces.com") // example: "https://nyc3.digitaloceanspaces.com"

// Loading csv from Spaces to DataFrame:
val df = sparkSession.read
      .option("header", true)
      .csv("s3a://<YOUR_SPACE_NAME>/<YOUR_FILE>.csv")

// Writing DataFrame to csv:
df.write.csv("s3a://<YOUR_SPACE_NAME>/<OUTPUT_PATH>")

Here is what worked for me to serve a video from spaces using the amazon sdk for javascript while using Angular and Node.js.

1) In Digital Ocean console, in API->Spaces access keys create new set of keys;
2) In Node.js backend, use npm module ‘aws-sdk’ to generate a client:

const spacesEndpoint = new AWS.Endpoint('fra1.digitaloceanspaces.com');
const s3 = new AWS.S3({
    // signatureVersion: `s3v2`,
    region: `fra1`,
    endpoint: spacesEndpoint,
    accessKeyId: <key>,
    secretAccessKey: <secret>
});

var params = {
    Bucket: "portugal",
    Key: "test.mp4"
};
const url = s3.getSignedUrl('getObject', params);

On every request, the client generates a signed URL that is then passed on to the frontend;
3) in the frontend, that link can be used to serve the video.


a) the region seems to be important, and it has to be the same as the one the space is located in;
b) the bucket name is not the same as its address;
c) when the Spaces docs speak about a “key”, they are actually speaking about the file path. “NoSuchKey” really just means that the file was not found.

They are pretty obvious, but I wasted an hour because of them.
