Question

Connecting Spaces data to a Docker-installed Azuracast app Station Media folder

Before I navigate the whole new learning curve and cost of using Spaces, I would like this question below answered:

Assuming I have a Spaces bucket called MYAUDIO that has a varied subfolder structure of media files, primarily MP3, how do I make this folder structure “visible” to serve as a source of audio files for my Azuracast radio stations’ Media (Music) access?

=== In sum, how do I get a Docker-installed version of Azuracast to “see” and access the hypothetical MyAudio [Spaces storage] and its subfolders for Azuracast Media use? ===



I need to update the s3fs command in Step 4 of my earlier message to read as follows (notice this is a MULTI-LINE rendition of a single command, using the line-continuation backslash “\”):

s3fs my-space /av/media \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=https://nyc3.digitaloceanspaces.com/ \
  -o use_path_request_style

To get this modified command I followed Laurent Lamaire’s tutorial article “Mounting DigitalOcean Spaces and Access Bucket From Droplet”.

Some notes:

  1. You must mkdir the folder structure to mount the Spaces bucket yourself. The name is relatively unimportant, but the final folder must be empty; in this case the …/media folder is empty and ready for the mount.
  2. Lamaire’s article requires you to generate Spaces Keys and copy them.
  3. Then, at the terminal command line, you echo these keys, separated by a colon (“:”), into a file ( ${HOME}/.passwd-s3fs ); see the sketch just after this list.
  4. Also, it happens that my Spaces bucket is in the nyc3 Region; if your Region is different, make the correct substitution in the url option.
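
For reference, here is a minimal sketch of creating that password file; ACCESS_KEY and SECRET_KEY are placeholders for the Spaces keys you generated, so substitute your own values:

# write the key pair, colon-separated, into the password file s3fs expects
echo "ACCESS_KEY:SECRET_KEY" > ${HOME}/.passwd-s3fs
# s3fs will typically refuse a password file that other users can read
chmod 600 ${HOME}/.passwd-s3fs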

Thanks, Bobby, but before I did everything I show below, I went to the S3 storage screen and, no matter what correct information I entered into it, it would not recognize my Spaces area or map the information into my stations’ Media selection page. Perhaps if you posted a snapshot of an example COMPLETED form for an nyc3 Spaces bucket named “my-space” and replied to this message with it, I would completely understand what to do. I’m not dumb, but, to the very best of my understanding of how to make the AZ S3 bucket screen work, I was not able to do so. After you see what I had to kludge together to finally get it working, you will understand why I would have fallen to my knees in grateful tears had the AZ S3 interface simply worked.

I did eventually find some guidance online away from DO that allowed me successfully to get Azuracast (AZ) to use my media files.

The trick was to “lie” (nowadays that’s the thing, isn’t it?) to AZ, making it believe that my SPACES media files were a SUBFOLDER of my AZ Docker volumes folders!

OP/ED ASIDE #1: Don’t let DO Customer Support heavy-handedly quote their “superior engineers” on how awful, terrible, and ill-advised it is to mount and use SPACES with AZ. It DOES work very well, once AZ “sees” the media content through both the mount and the docker-compose override diddling described below.

I will try to reconstruct the procedure I worked out to gain AZ Station Media Page access as carefully as I can, for the others who are going cray cray, like I was initially, trying to get AZ to work with SPACES. (No, the AZ onboard S3/AWS connection screen never did work properly for me, and I tired of trying to get every parameter on it tweaked exactly right.)

For better or worse, here we go …

  1. Use the DO Dashboard to create a SPACES named area (i.e., “my-space”).

  2. Run the following commands as root or, preferably, as your AZ sudo user.

  3. Create an EMPTY Linux folder structure to mount your named SPACES area: sudo mkdir -p /av/media (the -p also creates the parent /av folder if it doesn’t exist yet).

  4. Mount your SPACES folder structure (took me fer-EVER to figure out this syntax, even with several DO articles available on the subject!). NOTE: You may need to apt install the s3fs utility to make this work; an install sketch follows after the mount command (NOTE CAREFULLY: THIS IS A SINGLE COMMAND!):

sudo s3fs my-space /av/media fuse.s3fs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
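
If the s3fs utility isn’t installed yet, here is a minimal sketch of installing it (the s3fs package name is the stock Ubuntu/Debian one) and then confirming the bucket contents are visible once the mount is in place:

# install the s3fs FUSE client (Ubuntu/Debian package name)
sudo apt update && sudo apt install -y s3fs
# after the mount command above has run, check that the bucket contents show up
ls /av/media
df -h /av/media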

4a. Add a matching entry to your /etc/fstab file (not /etc/mtab, which is regenerated automatically), so your SPACES volume mount is restored when your Droplet reboots; a sketch of such an entry follows just below.
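
A sketch of such an /etc/fstab line, reusing the bucket name, mount point, region URL, and password-file location from above; treat the options as a starting point and adjust them to your own values:

# /etc/fstab entry for the s3fs mount (assumes the password file lives at /root/.passwd-s3fs)
my-space /av/media fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://nyc3.digitaloceanspaces.com/,passwd_file=/root/.passwd-s3fs 0 0

You can test the entry without rebooting by running sudo mount -a.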

  5. (OPT) At this point, I assume you have FileZilla or another quality SFTP file-transfer app on your workstation. Use it to access /av/media, then create folders within it and upload your media files (a command-line alternative is sketched just below). Then you’ll be able to begin working with them once AZ finally “sees” them.
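
For example, a rough sketch of copying a local folder of MP3s up to the mounted area over SSH; your-droplet-ip and the MyAudio folder name are placeholders:

# recursively copy a local audio folder into the mounted Spaces area (adjust user, host, and paths)
scp -r ./MyAudio root@your-droplet-ip:/av/media/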

  6. Move to your AZ directory:

cd /var/azuracast

  7. Create and edit a Docker “override” file to complement the docker-compose.yml parameters with new ones that add the my-space folders to the AZ Media library files. Docker expects this file to be named docker-compose.override.yml:

nano docker-compose.override.yml

  8. Within this new file, add the following lines. NOTE that “mystationN” is the short name of your station(s), and the “s3” name is arbitrary: it is the phantom subfolder (hence, the “lie” you must tell AZ) that points at the “/av/media” mount, and you can call it anything you wish; the AZ Station Media screen will only display the folders BELOW /av/media in this case. Also NOTE CAREFULLY that in this example I wanted BOTH my stations to reference the ONE SPACES mount, giving both stations equal access to all /av/media content. If you have TWO DIFFERENT SPACES, point the stations at different mounted volumes instead, i.e., /av/media1, /av/media2, etc.:

services:
  web:
    volumes:
      - /av/media:/var/azuracast/stations/mystation1/media/s3
      - /av/media:/var/azuracast/stations/mystation2/media/s3

  9. Save and close docker-compose.override.yml …

  10. You will need to stop and restart Docker to get the SPACES-mounted volume subfolders (“s3” in my example) recognized, so …

sudo docker-compose down

  11. sudo docker-compose up -d (a quick sanity check is sketched just below)
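
A sketch of that check, assuming the web service name and the mystation1 short name from the override above; it lists the mounted media from inside the container:

# confirm the container can see the Spaces media through the bind mount
sudo docker-compose exec web ls /var/azuracast/stations/mystation1/media/s3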

  12. I think I also Stopped and (re)Started my stations from within the AZ Dashboard to get each Station Media manager to see the new …/s3 SPACES media as a transparent subfolder structure.

  13. If you didn’t yet upload new media content to the /av/media area for AZ to access (see Step 5 above), now you can do so.

  14. Mark all this content, I’m guessing, as chmod 750 to avoid AZ giving you “unable to access” errors; a sketch follows below.
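
A sketch of that permissions pass; on an s3fs mount the effective permissions also depend on mount options such as allow_other and umask, so treat this as a starting point:

# give the owner full access and the group read/execute on everything under the mount
sudo chmod -R 750 /av/media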

Full disclosure: I consider myself an “enlightened and curious” user (mostly), not an SA nor a CS person, so no warranties nor support for what I’ve just shared. You may have things set up on your Droplet that bollix this approach up initially when you try it. However, keep working with it to straighten out the little kinks, and it should eventually work for you as it did for me.

Hi there,

You can do that by adding an S3 new storage location to your Azuracast installation.

To do that you can follow the steps from the official documentation here:

https://docs.azuracast.com/en/user-guide/storage-locations/s3-configuration

This also includes instructions for DigitalOcean Spaces.

Hope that this helps!

Best,

Bobby
