Any tutorial on doing Docker Auto Deployments to DO?
Have you looked into using webhooks and handling them on your server with tools like https://github.com/adnanh/webhook or https://www.hookdoo.com/?
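For example, adnanh/webhook reads a JSON hook definition and runs a script whenever the matching endpoint is hit. The hook id, script path, and working directory below are placeholders, not anything prescribed by the tool:

```json
[
  {
    "id": "redeploy-app",
    "execute-command": "/opt/deploy/redeploy.sh",
    "command-working-directory": "/opt/deploy"
  }
]
```

Running `webhook -hooks hooks.json -port 9000` then serves the hook at `/hooks/redeploy-app`, which you could configure as the webhook URL in Docker Hub or GitHub, with the script doing the `docker pull` and restart.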
I have created a Docker Swarm on a private network, complete with high availability, and I am at the stage of setting up TLS. However, the nodes must recognise each other by hostname, not IP address.
When I try to ping any of the seven nodes within the private network by hostname, it does not work; pinging by private IP address does work.
I did try to set up a DNS server; however, this did not resolve my issue.
Thank you, and I look forward to any suggestions that may lead me to a solution.
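One low-tech workaround, if DNS is not an option, is a static mapping in `/etc/hosts` on every node. The IPs and hostnames below are placeholders for your own seven nodes; on a real node you would set `HOSTS_FILE=/etc/hosts` and run this once per machine:

```shell
#!/bin/sh
# Append hostname -> private IP mappings so peers resolve without DNS.
# HOSTS_FILE defaults to a local file here for safety; point it at
# /etc/hosts on an actual Swarm node.
HOSTS_FILE="${HOSTS_FILE:-./hosts.example}"
cat >> "$HOSTS_FILE" <<'EOF'
10.132.0.2 swarm-manager-1
10.132.0.3 swarm-worker-1
10.132.0.4 swarm-worker-2
EOF
```

This has to be kept in sync by hand (or with a configuration tool) whenever a node is added or its IP changes, which is why a proper DNS setup is usually preferred.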
I have several web sites. I would like to run each in a separate Docker container to keep them isolated from one another. If I had multiple IP addresses, I could bind each IP to a container, like this:
`docker run -p 10.0.0.10:80:80 --name container1 <someimage> <somecommand>`
`docker run -p 10.0.0.11:80:80 --name container2 <someimage> <somecommand>`
But since I only have a single IP here, that is out of the question. Does anyone have an idea how I could make it work? The only thing I can think of is to run a load balancer, reverse proxy, NAT, or something similar. I know I could put the sites on different ports, but then I could lose traffic, since uncommon ports are more likely to be blocked than port 80.
If anyone can provide some insight, it will be greatly appreciated.
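The reverse-proxy idea is the usual answer: publish each container on a distinct localhost-only port and let nginx on port 80 route by `Host` header. A minimal sketch, where the domain names and ports are assumptions to be replaced with your own:

```nginx
# Name-based virtual hosting: each server block forwards to one container.
server {
    listen 80;
    server_name site1.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name site2.example.com;
    location / {
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header Host $host;
    }
}
```

The containers would then be started with `docker run -d -p 127.0.0.1:8081:80 --name container1 <someimage>` and `docker run -d -p 127.0.0.1:8082:80 --name container2 <someimage>`, so only nginx is reachable from the outside on port 80.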
On my local machine, setting `--net=host` on a container makes the container reachable on any ports it is using. On a Docker Ubuntu 16.04 Droplet, the port is not accessible with `--net=host`.
This works:
`docker run -p 8000:8000 -it python python -m http.server`
This does not work:
`docker run --net=host -p 8000:8000 -it python python -m http.server`
How can I expose the container application's ports with `docker run --net=host ...`?
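One thing worth knowing when debugging this: with `--net=host`, Docker ignores `-p` entirely (it prints a warning that published ports are discarded), because the container shares the host's network stack. Reachability then depends only on which address the server binds. `python -m http.server` binds `0.0.0.0` by default; this minimal sketch reproduces that bind outside Docker so you can confirm the bind-and-serve part works on its own:

```python
import http.server
import socketserver
import threading
import urllib.request

# Bind to all interfaces (what `python -m http.server` does by default);
# port 0 asks the OS for a free port so the sketch runs anywhere.
srv = socketserver.TCPServer(("0.0.0.0", 0), http.server.SimpleHTTPRequestHandler)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

# A request via loopback succeeds if the server really is listening.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
srv.shutdown()
```

If the same server is unreachable only when run under `--net=host`, the problem is outside the container: a firewall on the Droplet, or the process binding to a different address.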
I apologize if this is trivial for some, but what would be your recommended way to transfer data between my local files and RStudio running in Docker? I can use FileZilla to access my Droplet's folders, but I would like to view RStudio's working directory, which is simply "/home/rstudio".
Ideally, I would like to upload some .csv files to the Droplet, run some analyses in the Docker container, export some documents, and load them onto my local computer, but it is not clear to me what the most efficient way to achieve this would be.
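One common approach is to bind-mount a Droplet directory into the container at `/home/rstudio`, so anything you upload with FileZilla appears directly in RStudio's working directory and anything RStudio writes is visible on the Droplet. A sketch assuming the rocker/rstudio image (the host path `/root/project` and the password are placeholders):

```shell
# Bind-mount a host folder over RStudio's home directory.
# Files uploaded to /root/project on the Droplet show up in RStudio,
# and exports land back in /root/project for download via FileZilla.
docker run -d \
  -p 8787:8787 \
  -e PASSWORD=choose-a-password \
  -v /root/project:/home/rstudio \
  rocker/rstudio
```

The `-v host_path:container_path` flag is what keeps the two directories in sync; no copying into or out of the container is needed.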