I am new to Docker and Kubernetes and trying to figure out the following scenario.

Suppose a user makes a request to foo1.bar.com and there are no running containers waiting to handle this request at the moment.

A container with a specific .env file or specific (app-specific) variables should then be started, and the request forwarded to that container.

If a container is not used (receives no requests), it should be shut down to release its resources.

How can I do this? Kubernetes and Docker can probably handle it, but which key concepts do I need to focus on? Do I need a proxy or a DNS service?


3 answers

@doganbros If you have applied the appropriate resource definitions to a vanilla Kubernetes cluster, then you should be able to make requests to the available services.

Now, if you’re looking for a setup that services requests on demand and releases resources when idle, then I would recommend looking into serverless technologies like Knative or OpenWhisk.
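For instance, here is a minimal Knative Service sketch that scales to zero when idle. The service name, image, and environment variable below are hypothetical placeholders, not something from your setup:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: conference-backend        # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Knative scales to zero by default; these bound the replica count
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: example.com/conference-app:latest   # hypothetical image
          env:
            - name: APP_MODE     # stand-in for your app-specific variables
              value: "conference"
```

With min-scale set to 0, Knative's activator holds the first incoming request, starts a pod, and forwards the request once the pod is ready, which matches the on-demand behaviour described above.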

Well, I wish that the information above was helpful to you. If not, please feel free to post a follow-up question.

Think different and code well,


Hi Conrad. Thank you very much for your reply.

Here in my case, I need to release the resources when they are not used.
Maybe it's better to give more info about the nature of the system.
Basically, it is an online conferencing solution in which, when a conference ends, its resources should be released so they can be used for other conferences.
Each conference has its own subdomain.

In a Kubernetes environment, can I do the following? Say I provide X instances waiting idle. When a new request arrives, can I configure some of the idle instances (containers) according to the user's request (i.e. the subdomain of the request), reload the instance, and route the request accordingly?

Thank you

Interesting scenario. I wonder if you’d be better off routing all your subdomains to the same service? The service can detect which subdomain is being used from the request headers and act accordingly.

In this way, all your subdomains would share the same backend service, so you wouldn’t have to spin things up and down so frequently…
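As a rough sketch of the header-based approach (the base domain and function name are assumptions for illustration), the shared backend could extract the conference name from the HTTP Host header like this:

```python
from typing import Optional

def conference_from_host(host: str, base_domain: str = "bar.com") -> Optional[str]:
    """Extract the conference subdomain from an HTTP Host header.

    Returns e.g. "foo1" for "foo1.bar.com", or None if the host does
    not match the expected <conference>.<base_domain> pattern.
    """
    host = host.split(":")[0]          # strip an optional port
    suffix = "." + base_domain
    if host.endswith(suffix):
        sub = host[: -len(suffix)]
        if sub and "." not in sub:     # exactly one subdomain level
            return sub
    return None
```

The service would then look up (or lazily create) the per-conference state keyed by that name, so one pool of instances serves every subdomain.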

  • I think you are absolutely right about simplicity. Our current scenario is exactly like this.

    But I was thinking about a more efficient way to use the resources, just on demand.

    • I think the approach I’m suggesting makes sense also from the efficiency standpoint. You basically end up with a single pool of instances of your server running. So maybe 5 small conferences can run on a single instance, or 5 small conferences and one large one might need 3 instances total.

      This approach allows you to think of your total demand instead of calculating the demand of each subdomain separately.

      Additionally, won’t you always want at least one instance running? That way, a server is always available when that first request comes in.
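For the shared-pool idea with one instance always kept warm, a standard HorizontalPodAutoscaler would scale on total demand rather than per subdomain. A sketch, assuming a Deployment named conference-backend and a CPU target picked arbitrarily:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: conference-backend-hpa     # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: conference-backend       # hypothetical Deployment
  minReplicas: 1                   # always keep one instance warm
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed threshold; tune for your workload
```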
