I was looking at deploying an Elixir app to App Platform, but the app in question requires instances to communicate with each other to support real-time clients connected over WebSockets.

I have 3 questions…

  1. Are inbound WebSocket connections supported on App Platform? If so, are there any timeouts or gotchas?
  2. Is there any service discovery (DNS, maybe) for a container to discover other running instances of the same App?
  3. Are there any private networking routes between containers of the same App? Assuming 2 is possible, is it then possible to talk between instances of the same App?
3 answers

We have a socket.io app running on App Platform and it works just fine (4 instances running currently).

So, to answer your questions:

1) Yes, WebSockets are supported.

2) Not sure on this one, as it's not specifically clear from the docs, but my guess would be that there is a load balancer that round-robins requests to each running pod. I believe it's round robin because there is no option for sticky sessions, so that would make sense.

3) Again, I don’t see anything specific on this front, but this appears to be a Kubernetes/Docker cluster under the hood: more of a CaaS offering that automatically takes your repo, builds a Docker image on the fly, and pushes it into the running Kubernetes cluster. Spinning up another pod is similar to increasing your ‘instances’ in Kubernetes.

With all that said, it’s important to remember that WebSockets are sticky by nature, so you don’t need to worry about staying on the same server. However, to broadcast to users connected to other running pods, you will have to use something like Redis to relay the emits across all servers.

I’m not sure what tech you are using for WebSockets; we use socket.io and socket.io-redis for this, as it’s easy to add with minimal code changes.

With this setup, each instance registers itself in Redis, so Redis keeps track of every server as new nodes are added. When a request to emit is sent, it goes out to all the socket servers and each one runs the emit locally.

  • Thanks for clarifying the WebSocket support.

    It is more the pub/sub backend side that I was interested in for 2 & 3. As you mention, Redis is a potential option, but part of the beauty of Elixir is that this piece is unnecessary: if the nodes are networked correctly, inter-node messaging is easy and natural, and the dependency on Redis goes away.

    More specifically I am talking about https://hexdocs.pm/phoenix_pubsub/Phoenix.PubSub.html

    I already have all of this working on a Kubernetes cluster, so at this point it’s not so much a question about how to do WebSockets but more about the technical limitations of App Platform, as I am considering a possible migration.
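For anyone reading along, the Phoenix.PubSub piece referenced above looks roughly like this. A minimal sketch; the :my_app_pubsub name and the topic are made up for illustration:

```elixir
# In the application's supervision tree: start a PubSub server.
# :my_app_pubsub is an illustrative placeholder name.
children = [
  {Phoenix.PubSub, name: :my_app_pubsub}
]

# Any process on any connected node can subscribe to a topic...
Phoenix.PubSub.subscribe(:my_app_pubsub, "room:42")

# ...and a broadcast from any node reaches subscribers on all nodes,
# with no Redis in between, as long as the BEAM nodes are connected
# via distributed Erlang.
Phoenix.PubSub.broadcast(:my_app_pubsub, "room:42", {:new_message, "hello"})
```

The catch is the "as long as the nodes are connected" part, which is exactly what questions 2 and 3 are about.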

I’m not familiar with Elixir, so I can’t really speak to that, but from what I have seen, the pods currently don’t communicate: the IPs are dynamic, and I don’t see any internal IPs that a cluster could map to. If it is Kubernetes under the hood, I would imagine it could be done, but from what I have seen it’s not exposed just yet (may need DO to chime in here).
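For what it's worth, if App Platform did expose an internal DNS name per app (it doesn't appear to, per the above), the usual Elixir approach would be libcluster's DNS polling strategy to form the cluster automatically. A hypothetical config sketch; the "myapp.internal" hostname and "myapp" basename are invented, not something App Platform provides:

```elixir
# config/runtime.exs (sketch): Cluster.Strategy.DNSPoll polls a DNS
# name and connects to every node it resolves to. On a Kubernetes
# cluster you control, a headless Service provides this kind of record.
config :libcluster,
  topologies: [
    app_platform: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        # Hypothetical internal hostname that would resolve to all pods.
        query: "myapp.internal",
        node_basename: "myapp"
      ]
    ]
  ]
```

This is why the OP's setup works on a self-managed Kubernetes cluster: the headless Service DNS record gives each node a way to discover its peers.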

As far as migrating to App Platform, personally I would just stay on your current Kubernetes setup until DO changes the bandwidth limitations of App Platform.

I’m using it right now for an event we are hosting, more as a test, but for the long term the bandwidth cost is way out of whack.

If you check my previous post about it, you will see that the bandwidth cost will bite you in the butt.

I like the CaaS offering, but hosting your own cluster will give you more horsepower at a substantial cost savings.

Trust me, I love the idea of this on-demand CaaS service, and if the bandwidth limitation were more in line I would jump in, because it makes it incredibly easy to manage running pods, similar to how OpenShift works. My main gripe is that bandwidth is pooled across apps, not pods: you pay $12.00 for 1 pod in an app, and adding more pods costs $12.00 each but comes with no extra bandwidth. If we are paying $12.00 per pod but getting nothing more than compute, the price should reflect that and be something smaller to account for the lack of extra bandwidth.

Also, not to rant, but the cost per GB of overage transfer is $0.10 vs. $0.01, which is 900% more than a standard Droplet.

This might be answered over here - https://www.digitalocean.com/community/questions/app-platform-component-to-component-requests?answer=66424

(It seems possible to have inter-service networking within an app, though I haven’t yet tried it myself.)