So I’m building a platform that will allow file uploads, and I’ll be using DigitalOcean Spaces because it’s more affordable for my project. I’ve encountered 2 problems so far when uploading files with a presigned URL; one is independent of the other. Here’s the code that generates the URL:

    $spaces = \Storage::disk('spaces');
    $client = $spaces->getDriver()->getAdapter()->getClient();
    $expiry = "+60 minutes";
    $filename = time() . '.jpeg';

    $command = $client->getCommand('PutObject', [
        'Bucket'      => 'jgb',
        'Key'         => "images/$filename",
        'ACL'         => 'public-read',
        'ContentType' => 'application/octet-stream',
    ]);

    $request = $client->createPresignedRequest($command, $expiry);

    return (string) $request->getUri();

and it works fine for file uploads using Postman, except that I’m running into 2 problems:

  • I can’t get the content type for the files and I get a weird type such as
multipart/form-data; boundary=--------------------------532068335149033986173029

when testing with Postman.

  • I am getting a “SignatureDoesNotMatch” error when the presigned URL is generated with the ACL set to “public-read” to allow access to everyone.

I’ve googled around and couldn’t find anything related to it. I’m thinking maybe DigitalOcean Spaces just lacks some of the features of S3.

Any help is welcome. Thanks!

2 answers

Hi, Spaces Engineering team member here.

It looks like you’re using the AWS PHP SDK v3; please do correct me if I’m wrong.

Based on the multipart/form-data string, I think you might be doing a request with the POST HTTP method rather than a PUT HTTP method.

If this is the case, you need to either use the PUT method for the actual upload (being sure to specify the headers & fields per your signed request), or generate a signed POST request to use instead.

The public-read ACL you have specified is a constraint on what permissions should be present on the object AFTER it’s uploaded; the same applies to the content type.

The SDK does provide explicit methods to ensure you get this correct:

  • $request->getMethod();
  • $request->getHeaders();

If you read the URI string closely, you should see a parameter that shows which headers were signed. Any such headers will need to be preserved in the upload request.
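
For illustration, here is a rough, untested sketch of sending the upload with Guzzle (which ships with the SDK), preserving the signed method and headers; the local file path is just a placeholder:

    // Rough sketch (untested): $request is the result of
    // createPresignedRequest() from your snippet.
    $http = new \GuzzleHttp\Client();

    $response = $http->request(
        $request->getMethod(),                  // "PUT", not "POST"
        (string) $request->getUri(),
        [
            // Preserve any signed headers (e.g. x-amz-acl, Content-Type).
            'headers' => $request->getHeaders(),
            // Send the raw file bytes as the body, not a multipart form.
            'body'    => fopen('/path/to/local.jpeg', 'r'),
        ]
    );

Sending the raw bytes as the body is also what avoids the multipart/form-data content type you were seeing from Postman.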

@robbat2 thanks for answering personally.

DO Spaces behaviour is not consistent with AWS behaviour.

This is my typical snippet to get a presigned URL with the aws-sdk:

    const url = await s3.getSignedUrl('putObject', {
        Bucket: process.env.S3_BUCKET,
        Expires: Number(process.env.S3_URL_TTL),
        Key: fileKey,
        // CacheControl: 'max-age=31536000',
        ACL: isPublic ? 'public-read' : undefined,
        Metadata: metadata,
    });

I can then PUT my file to the generated URL without needing to send the signed metadata headers (x-amz-meta-* and x-amz-acl).
This is a great advantage:

  • not having to carry them around
  • being guaranteed that the metadata set in the putObject is frozen
  • in an API context, not needing to explain to the front-end developers that they have to send various headers and such

DO Spaces, on the other hand, complains with a 403 Forbidden (invalid signature) because it requires passing those headers when making the PUT request.

Any chance to avoid this and replicate the AWS behaviour?
