Question

How do I make my Space PUBLIC?

Posted July 10, 2018 · 13.7k views
Configuration Management · Ubuntu 18.04

I can’t see a permissions option for the Space’s root or for subfolders like FILES.
I want to make it public as a whole.
Please help.

1 comment
  • It is becoming a regular occurrence that I find myself on one of these DO support pages, reading about an issue (AND SOLUTIONS) that has been completely ignored for years. The longer I am with DO, the more I lose faith that they even care about customers anymore.

    Yet another reason to move away from DO rather than trying to embrace their services. Not being able to bulk-update the permissions of a bucket is ridiculous. This was raised two and a half years ago, DO… WAKE UP???

10 answers

+1. An upload into a directory that has a public ACL does not inherit that setting.

I have the same issue. How can newly uploaded files be set to PUBLIC automatically?

This might help everyone. I just solved the issue: a file uploaded to a Space is set to private by default. Here is how you can change it to public. Full code below; by the way, this is Node:

const
  s3 = require('s3'), // the `s3` package (node-s3-client)
  GV = require('/path/to/config/file');

let client = s3.createClient({
  s3Options: {
    accessKeyId: GV.digitalOcean.spaces.key,
    secretAccessKey: GV.digitalOcean.spaces.secret,
    region: GV.digitalOcean.spaces.region,
    endpoint: GV.digitalOcean.spaces.endpoint
  },
});

let params = {
  localDir: '/local/path/file',
  deleteRemoved: true, // delete remote files that no longer exist locally

  s3Params: {
    Bucket: GV.digitalOcean.spaces.bucketName,
    Prefix: '/remote/path/file',
    ACL: 'public-read' // applied to every uploaded file, so each lands public
  },
};

let uploader = client.uploadDir(params);

uploader.on('error', function(err) {
  console.error("unable to sync:", err.stack);
});

uploader.on('progress', function() {
  console.log("progress", uploader.progressAmount, uploader.progressTotal);
});

uploader.on('end', function() {
  console.log('done');
});

  • For anyone trying to do this in .NET/C# with the AWSSDK.S3 library, use the CannedACL property on the PutObjectRequest model, e.g.:

    var req = new PutObjectRequest()
    {
       BucketName = _bucket,
       InputStream = fileStream,
       Key = fileName,
       CannedACL = new S3CannedACL("public-read") // equivalently, S3CannedACL.PublicRead
    };
    
    var res = await _s3Client.PutObjectAsync(req);
    

Not being able to set default permissions on a Space is a big problem. I don’t think the code snippets shared above are a real solution.

Collaborators on my projects should be able to update content in the Space without running any code. The real fix would be for DigitalOcean to implement decent permission inheritance.

Hello friend!

This can definitely be done. I have some documentation here that can help:
https://www.digitalocean.com/docs/spaces/how-to/file-permissions/

Kind Regards,
Jarland

Hi, the only way to edit permissions on multiple files is to go into a folder, select them all, and then open the ACTION dropdown, which lets you edit permissions for all selected files. However, this does NOT work when you use “select all files in this folder.”

I think this needs to be fixed, because it cost me time, and time is money.

Correction: it does work. Select ALL, then scroll down so more files load, then select all again. You should now be able to change permissions. Still, it would be nice to have more functionality for file and folder management; it’s basic at best, but it does the job, I guess.
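
For anyone who would rather script that bulk change than click through the control panel, here is a minimal sketch using boto3 (the same idea works with any S3 client); the bucket name, region, endpoint, and keys below are placeholders:

# Sketch: mark every existing object in a Space as public-read.
# All names here (my-space, nyc3, SPACES_KEY/SECRET) are placeholders.
from boto3.session import Session

session = Session(
    aws_access_key_id="SPACES_KEY",
    aws_secret_access_key="SPACES_SECRET",
    region_name="nyc3",
)
client = session.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

paginator = client.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-space"):
    for obj in page.get("Contents", []):
        # One PutObjectAcl request per object, so large Spaces take a while.
        client.put_object_acl(Bucket="my-space", Key=obj["Key"], ACL="public-read")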

Programmatically, using Python 3 and boto3, I essentially use this code to upload a file and make it public:

from io import BytesIO

from boto3.session import Session

def upload_file_and_make_it_public(filename: str, file_obj: BytesIO, bucket: str):
    session = Session(
        aws_access_key_id="...",
        aws_secret_access_key="...",
        region_name="...",
    )
    client = session.client(
        "s3", endpoint_url="https://...digitaloceanspaces.com", region_name="...",
    )
    # Upload first; objects default to private...
    client.upload_fileobj(Bucket=bucket, Key=filename, Fileobj=file_obj)
    resource = session.resource(
        "s3", endpoint_url="https://...digitaloceanspaces.com", region_name="...",
    )
    # ...then flip the object's ACL to public-read.
    resource.Object(bucket, filename).Acl().put(ACL='public-read')
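
A small follow-up on the snippet above: boto3 can also apply the ACL during the upload itself via ExtraArgs, which saves the second request and the resource object. A sketch, reusing client, file_obj, filename, and bucket from the function above:

# Same upload, but with public-read applied in the single upload call.
client.upload_fileobj(
    Fileobj=file_obj,
    Bucket=bucket,
    Key=filename,
    ExtraArgs={"ACL": "public-read"},
)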

// Assumes an aws-sdk (v2) S3 client; the endpoint and env var names are placeholders.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    endpoint: 'https://nyc3.digitaloceanspaces.com', // your Space's endpoint
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET
});

// Attach a bucket policy that makes every object publicly readable.
s3.putBucketPolicy({
    Bucket: process.env.BUCKET,
    Policy: JSON.stringify({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicRead",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:GetObjectVersion"],
                "Resource": [`arn:aws:s3:::${process.env.BUCKET}/*`]
            }
        ]
    })
}, (err, data) => {
    if (err) {
        console.log(err);
    } else {
        console.log(data);
    }
});

With the aws-sdk you can set a bucket policy like this so that every uploaded file is publicly readable. I still can’t get it to work through the edge CDN, though.
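
If you prefer Python, the same policy can be applied with boto3’s put_bucket_policy. A minimal sketch; the endpoint, keys, and bucket name are placeholders:

import json

from boto3.session import Session

# Placeholders: fill in your Space's region, endpoint, keys, and bucket name.
session = Session(
    aws_access_key_id="SPACES_KEY",
    aws_secret_access_key="SPACES_SECRET",
    region_name="nyc3",
)
client = session.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": ["arn:aws:s3:::my-space/*"],
        }
    ],
}
client.put_bucket_policy(Bucket="my-space", Policy=json.dumps(policy))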