Unable to upload files to DigitalOcean Spaces using Ansible

Using Ansible against an Ubuntu 16.04 target, I’m unable to upload files to DigitalOcean Spaces with the aws_s3 module.

This is the playbook task:

    - name: Uploading backup
      aws_s3:
        aws_access_key: "..."
        aws_secret_key: "..."
        region: nyc3
        s3_url: ""
        bucket: "working-bucket"
        object: "my_file.tar.gz"
        src: "my_file.tar.gz"
        mode: put
        rgw: True

and this is the error:

   "module_stderr":"OpenSSH_7.4p1, LibreSSL 2.5.0\r\ndebug1: Reading configuration data /Users/viniciussantana/.ssh/config\r\ndebug3: kex names ok: [diffie-hellman-group1-sha1]\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 53: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 3258\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to closed.\r\n",
   "module_stdout":"Traceback (most recent call last):\r\n  File \"/tmp/ansible_3uxtv9/\", line 863, in <module>\r\n    main()\r\n  File \"/tmp/ansible_3uxtv9/\", line 772, in main\r\n    upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n  File \"/tmp/ansible_3uxtv9/\", line 469, in upload_s3file\r\n    s3.upload_file(Filename=src, Bucket=bucket, Key=obj, ExtraArgs=extra)\r\n  File \"/usr/local/lib/python2.7/dist-packages/boto3/s3/\", line 110, in upload_file\r\n    extra_args=ExtraArgs, callback=Callback)\r\n  File \"/usr/local/lib/python2.7/dist-packages/boto3/s3/\", line 283, in upload_file\r\n    filename, '/'.join([bucket, key]), e))\r\nboto3.exceptions.S3UploadFailedError: Failed to upload /home/vegbrasil/backups/dev-bra1_20171121141258.tar to working-bucket/dev-bra1_20171121141258.tar: An error occurred (InvalidRequest) when calling the PutObject operation: Unknown\r\n",
   "msg":"MODULE FAILURE",

Has anyone managed to make this work? I’m not sure if this is a problem in Ansible or something else, because there is a little upload activity before the error occurs.

(also reported in the Ansible repo)
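One way to narrow this down is to attempt the same upload directly with boto3 (the library in the traceback above), bypassing Ansible entirely. This is only a sketch: the helper below mimics the arguments the module effectively passes to `upload_file`, and the endpoint and credentials in the commented section are placeholders, not values from the original post.

```python
# Sketch: reproduce the aws_s3 module's upload call outside Ansible.
def put_object_args(bucket, key, encrypt=True):
    """Build the kwargs that end up in S3.upload_file. With
    encrypt=True, ExtraArgs carries the server-side-encryption
    header that the error suggests Spaces is rejecting."""
    extra = {"ServerSideEncryption": "AES256"} if encrypt else {}
    return {"Bucket": bucket, "Key": key, "ExtraArgs": extra}

args = put_object_args("working-bucket", "my_file.tar.gz", encrypt=True)
print(args["ExtraArgs"])  # {'ServerSideEncryption': 'AES256'}

# To actually attempt the upload (requires real credentials):
# import boto3
# s3 = boto3.client(
#     "s3",
#     region_name="nyc3",
#     endpoint_url="https://nyc3.digitaloceanspaces.com",
#     aws_access_key_id="...",
#     aws_secret_access_key="...",
# )
# s3.upload_file(Filename="my_file.tar.gz", **args)
```

If the direct call fails the same way with the encryption header and succeeds without it, the problem is the request itself rather than Ansible.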


I was running into this same issue myself, and was able to track down the problem. DigitalOcean Spaces does not currently support the x-amz-server-side-encryption header, which Ansible sets by default. Adding encrypt: false to the put task allows it to proceed as expected.

Here is a full working example of creating a bucket, uploading a file to it, and then listing its contents:

    - hosts: localhost
      vars:
        spaces_access_key: "{{ lookup('env','SPACES_ACCESS_KEY') }}"
        spaces_secret_key: "{{ lookup('env','SPACES_SECRET_KEY') }}"
        spaces_region: nyc3
        spaces_endpoint: "https://{{ spaces_region }}.digitaloceanspaces.com"
        space_name: my-ansible-bucket-001
        key_name: file.ext
        file_path: /path/to/my/file.ext

      tasks:
        - name: Creating Space
          aws_s3:
            aws_access_key: "{{ spaces_access_key }}"
            aws_secret_key: "{{ spaces_secret_key }}"
            s3_url: "{{ spaces_endpoint }}"
            bucket: "{{ space_name }}"
            mode: create
            rgw: true

        - name: Uploading object
          aws_s3:
            aws_access_key: "{{ spaces_access_key }}"
            aws_secret_key: "{{ spaces_secret_key }}"
            s3_url: "{{ spaces_endpoint }}"
            bucket: "{{ space_name }}"
            object: "{{ key_name }}"
            src: "{{ file_path }}"
            mode: put
            encrypt: false
            rgw: true

        - name: List objects in Space
          aws_s3:
            aws_access_key: "{{ spaces_access_key }}"
            aws_secret_key: "{{ spaces_secret_key }}"
            s3_url: "{{ spaces_endpoint }}"
            bucket: "{{ space_name }}"
            mode: list
            rgw: true
          register: spaces_items

        - debug:
            msg: "Contents of {{ space_name }}: {{ spaces_items.s3_keys }}"
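The `{{ spaces_endpoint }}` value used above follows the standard Spaces URL scheme, which can also be built and checked outside Ansible. A minimal sketch in Python; the bucket name and credentials in the commented part are placeholders:

```python
# Sketch: build the Spaces endpoint from the region, mirroring the
# spaces_endpoint variable referenced in the playbook tasks.
def spaces_endpoint(region):
    return "https://{}.digitaloceanspaces.com".format(region)

print(spaces_endpoint("nyc3"))  # https://nyc3.digitaloceanspaces.com

# Listing the Space's contents directly with boto3 (placeholder values):
# import boto3
# s3 = boto3.client("s3", region_name="nyc3",
#                   endpoint_url=spaces_endpoint("nyc3"),
#                   aws_access_key_id="...",
#                   aws_secret_access_key="...")
# resp = s3.list_objects(Bucket="my-ansible-bucket-001")
# print([obj["Key"] for obj in resp.get("Contents", [])])
```

Listing with boto3 directly is a quick way to confirm the upload landed without re-running the whole playbook.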

Are you using Signature v4 for the API requests? I believe Spaces is currently not compatible with it.
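If the signature version turns out to matter, boto3 reads it from ~/.aws/config; a sketch of pinning it there (these are the standard boto3/AWS CLI config keys, not values from the original post):

```ini
# ~/.aws/config — pin the S3 request signature version (sketch)
[default]
s3 =
    # "s3" selects the legacy v2 signer; "s3v4" selects Signature v4
    signature_version = s3
```

Switching the value between `s3` and `s3v4` and retrying the upload would show whether the signature version is involved.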