Can't use Spaces via Amazon S3 SDK

Posted February 10, 2018 4k views

I’m trying to perform a PUT operation towards Spaces using AWS S3 Java SDK, but I keep getting the same error.

Here’s the definition of the client:

    public AmazonS3 spaces() {
        BasicAWSCredentials credentials = new BasicAWSCredentials(...);
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("", "ams3"))
                .build();
    }

Here’s usage:

spaces.putObject("bucketname", fullPath, tempFile);

And the error is: Bad request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad request; Request ID: null; S3 Extended Request ID: null)

SDK version:


I tried other endpoint and region values in AwsClientBuilder.EndpointConfiguration, but that didn’t work either.

And when I try to GET the file, I surprisingly get a different error: null (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: ...; S3 Extended Request ID: ...)
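For context, SignatureDoesNotMatch generally means the string the client signed differs from the string the server reconstructed, for example because the object key was percent-encoded differently on each side. A minimal sketch (plain Java, no SDK; the `hmacHex` helper and the canonical strings are illustrative, not the SDK’s actual signing code) showing that two canonical requests differing only in how `$` is encoded yield different HMAC-SHA256 signatures:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SignatureSketch {
    // Hypothetical helper: hex-encoded HMAC-SHA256 of a message,
    // mimicking one step of the SigV4 signing chain.
    static String hmacHex(String key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        StringBuilder sb = new StringBuilder();
        for (byte b : mac.doFinal(message.getBytes(StandardCharsets.UTF_8))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String secret = "example-secret"; // placeholder, not a real key
        // The same object key, encoded two different ways in the canonical request:
        String raw     = "PUT\n/bucket/my$file.txt";
        String encoded = "PUT\n/bucket/my%24file.txt";
        System.out.println(hmacHex(secret, raw).equals(hmacHex(secret, encoded))); // prints "false"
    }
}
```

If client and server disagree on the encoding, the two signatures cannot match, which is exactly the 403 above.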

I’m pretty sure I’ve specified the key and secret correctly; I double-checked them.

1 comment
  • I found that if I call listObjects on the bucket before putObject, it works. I have been trying to get a pre-signed URL for uploads all day. I am giving up on this. This product is incomplete and/or super buggy. It’s not S3-compatible as they claim.


4 answers

I am facing the same issue with AWS SDK: 1.11.528

(surprisingly after almost one and half years this is still not resolved)

Maybe a better strategy is not to claim DO spaces are S3 compatible.

  • Eventually, I gave up on DO and switched completely to AWS.

    • For what it’s worth, the root cause is that my destination file path contains a ‘$’, which needs to be explicitly URL-escaped. However, the error message is extremely unclear.

      Another issue I had with DO Spaces is that bulk delete performance is disappointing.

      In general, it’s not as great a product as other DO services.

      • I am experiencing the same issue when accessing files whose names contain whitespace (and are therefore URL-encoded by the AWS SDK). The issue seems to appear only for private objects, since the URL shown for the file in the DO Dashboard is identical to the one the AWS SDK attempts to access.
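One way to see what the ‘$’ and whitespace reports above have in common is to look at how such keys come out after percent-encoding. A small stdlib sketch (the `encodeKey` helper is hypothetical, not part of the AWS SDK, which normally does this encoding itself; whether pre-encoding helps depends on how the server decodes keys):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class KeyEncoding {
    // Hypothetical helper: percent-encode an object key per path segment,
    // leaving '/' separators intact. URLEncoder targets form encoding,
    // so its '+' (for a space) is rewritten to '%20'.
    static String encodeKey(String key) throws UnsupportedEncodingException {
        StringBuilder sb = new StringBuilder();
        for (String segment : key.split("/", -1)) {
            if (sb.length() > 0) sb.append('/');
            sb.append(URLEncoder.encode(segment, "UTF-8").replace("+", "%20"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Both a space and '$' end up percent-encoded in the request path:
        System.out.println(encodeKey("reports/2018/q1 $final.txt"));
        // → reports/2018/q1%20%24final.txt
    }
}
```

Keys containing only unreserved characters (letters, digits, `-`, `_`, `.`) encode to themselves, which may explain why “plain” keys work while keys with `$` or spaces trigger the signature mismatch.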

I am facing the same problem. Did anyone resolve this, or is DigitalOcean unsure about the solution?

I know the question is old, but I faced the same issue and RESOLVED it. In my case, the problem was a wrong system date: it was set a few days in the future. After I set the correct date/time, everything worked fine :-)
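The clock explanation above fits how SigV4 works: each request embeds a timestamp, and AWS-compatible servers reject requests whose timestamp drifts too far from their own clock (AWS documents a tolerance of about 15 minutes). A minimal stdlib sketch of that check (the `withinSkew` helper is illustrative, not actual server code):

```java
import java.time.Duration;
import java.time.Instant;

public class ClockSkewCheck {
    // Hypothetical helper: would a request signed at 'signedAt' be accepted
    // when the server's clock reads 'serverNow'? AWS tolerates ~15 minutes.
    static boolean withinSkew(Instant signedAt, Instant serverNow) {
        return Duration.between(signedAt, serverNow).abs().toMinutes() <= 15;
    }

    public static void main(String[] args) {
        Instant serverNow = Instant.parse("2018-02-10T12:00:00Z");
        // A few minutes of drift is fine:
        System.out.println(withinSkew(Instant.parse("2018-02-10T12:05:00Z"), serverNow)); // prints "true"
        // A clock set a few days ahead, as in the answer above, is rejected:
        System.out.println(withinSkew(Instant.parse("2018-02-13T12:00:00Z"), serverNow)); // prints "false"
    }
}
```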

For me it is working fine in Java:

    private static final String DO_ACCESS_KEY = "some_access_key";
    private static final String DO_SECRET_KEY = "some_secret_key";
    private static final String BUCKET_ENDPOINT = "";
    private static final String BUCKET_REGION = "fra1";
    private static final String BUCKET_NAME = "my-bucket-name";

    public String uploadImageToStorage(byte[] byteimage) {
        AWSCredentialsProvider awscp = new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(DO_ACCESS_KEY, DO_SECRET_KEY));

        AmazonS3 space = AmazonS3ClientBuilder.standard()
                .withCredentials(awscp)
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(BUCKET_ENDPOINT, BUCKET_REGION))
                .build();

        InputStream is = new ByteArrayInputStream(byteimage);

        // Set the content length so the SDK doesn't have to buffer the stream.
        ObjectMetadata om = new ObjectMetadata();
        om.setContentLength(byteimage.length);

        String filepath = "images/users/test/testfile.jpg";
        space.putObject(BUCKET_NAME, filepath, is, om);

        return space.getUrl(BUCKET_NAME, filepath).toString();
    }