Question

Unable to upload files to DigitalOcean Spaces using Ansible

Posted November 26, 2017
Ansible Ubuntu 16.04 Object Storage

Using Ansible 2.4.1.0 against an Ubuntu 16.04 target, I’m unable to upload files to DigitalOcean Spaces with the aws_s3 module.

This is the playbook task:

    - name: Uploading backup
      aws_s3:
        aws_access_key: "..."
        aws_secret_key: "..."
        region: nyc3
        s3_url: "https://nyc3.digitaloceanspaces.com"
        bucket: "working-bucket"
        object: "my_file.tar.gz"
        src: "my_file.tar.gz"
        mode: put
        rgw: True

and this is the error:

{
   "changed":false,
   "module_stderr":"OpenSSH_7.4p1, LibreSSL 2.5.0\r\ndebug1: Reading configuration data /Users/viniciussantana/.ssh/config\r\ndebug3: kex names ok: [diffie-hellman-group1-sha1]\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 53: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 3258\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to dev-bra1.my-company.net closed.\r\n",
   "module_stdout":"Traceback (most recent call last):\r\n  File \"/tmp/ansible_3uxtv9/ansible_module_aws_s3.py\", line 863, in <module>\r\n    main()\r\n  File \"/tmp/ansible_3uxtv9/ansible_module_aws_s3.py\", line 772, in main\r\n    upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n  File \"/tmp/ansible_3uxtv9/ansible_module_aws_s3.py\", line 469, in upload_s3file\r\n    s3.upload_file(Filename=src, Bucket=bucket, Key=obj, ExtraArgs=extra)\r\n  File \"/usr/local/lib/python2.7/dist-packages/boto3/s3/inject.py\", line 110, in upload_file\r\n    extra_args=ExtraArgs, callback=Callback)\r\n  File \"/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py\", line 283, in upload_file\r\n    filename, '/'.join([bucket, key]), e))\r\nboto3.exceptions.S3UploadFailedError: Failed to upload /home/vegbrasil/backups/dev-bra1_20171121141258.tar to working-bucket/dev-bra1_20171121141258.tar: An error occurred (InvalidRequest) when calling the PutObject operation: Unknown\r\n",
   "msg":"MODULE FAILURE",
   "rc":0
}

Has anyone managed to make this work? I’m not sure whether this is a problem in Ansible or something else, because there is a bit of upload activity before the error occurs.

(also reported in the Ansible repo)


2 answers

I was running into this same issue myself, and was able to track down the problem. DigitalOcean Spaces does not currently support the x-amz-server-side-encryption header, which Ansible sets by default. Adding encrypt: false to the put task allows it to proceed as expected.

Here is a full working example of creating a bucket, uploading a file to it, and then listing its contents:

---
- hosts: localhost
  vars:
    spaces_access_key: "{{ lookup('env','SPACES_ACCESS_KEY') }}"
    spaces_secret_key: "{{ lookup('env','SPACES_SECRET_KEY') }}"
    spaces_endpoint: https://nyc3.digitaloceanspaces.com
    spaces_region: nyc3
    space_name: my-ansible-bucket-001
    key_name: file.ext
    file_path: /path/to/my/file.ext

  tasks:
    - name: Creating Space
      aws_s3:
        aws_access_key: "{{ spaces_access_key }}"
        aws_secret_key: "{{ spaces_secret_key }}"
        s3_url: "{{ spaces_endpoint }}"
        bucket: "{{ space_name }}"
        mode: create
        rgw: true

    - name: Uploading object
      aws_s3:
        aws_access_key: "{{ spaces_access_key }}"
        aws_secret_key: "{{ spaces_secret_key }}"
        s3_url: "{{ spaces_endpoint }}"
        bucket: "{{ space_name }}"
        object: "{{ key_name }}"
        src: "{{ file_path }}"
        mode: put
        encrypt: false
        rgw: true

    - name: List objects in Space
      aws_s3:
        aws_access_key: "{{ spaces_access_key }}"
        aws_secret_key: "{{ spaces_secret_key }}"
        s3_url: "{{ spaces_endpoint }}"
        bucket: "{{ space_name }}"
        mode: list
        rgw: true
      register: spaces_items

    - debug:
        msg: "Contents of {{ space_name }}: {{ spaces_items.s3_keys }}"

Are you using Signature v4 for the API requests? I believe Spaces is not currently compatible with it.

  • I’m not sure which signature version Ansible is using.

    Since other tasks work (listing and downloading files, for example), my guess is that it’s using v2.
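For context on what “v2” means here: Signature v2 signs a short newline-joined canonical string with HMAC-SHA1, while v4 uses a more involved HMAC-SHA256 key-derivation scheme. A rough stdlib-only sketch of the v2 signing step (simplified; real requests also fold in canonicalized x-amz-* headers, and the key and resource below are made up):

```python
# Rough sketch of the AWS Signature v2 signing step (the scheme Spaces
# reportedly accepts). Simplified: real requests also include
# canonicalized x-amz-* headers in the string to sign.
import base64
import hashlib
import hmac


def sign_v2(secret_key, method, content_md5, content_type, date, resource):
    # v2 joins the request elements with newlines and signs with HMAC-SHA1.
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# The resulting Authorization header would look like "AWS <access_key>:<signature>".
sig = sign_v2("fake-secret", "PUT", "", "application/x-tar",
              "Tue, 21 Nov 2017 14:12:58 +0000", "/working-bucket/my_file.tar.gz")
```

Since the two schemes produce entirely different Authorization headers, a server that only understands v2 will reject a v4-signed request outright, which is consistent with the InvalidRequest error above.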
