Hello all,

I am researching how best to move a portfolio of apps hosted for a small digital agency to managed Kubernetes on DigitalOcean, to reduce cost and management overhead.

They span a variety of technologies and are hosted in a multitude of ways, but each on its own VPS:

  • 3 instances running a large PHP app (servers for the API, Frontend + ControlPanel, and MongoDB)
  • 2 LAMP instances with WordPress
  • 5 LEMP instances with WordPress
  • 2 instances running a NodeJS app (servers for the app and MySQL)
  • 1 instance running a large Django app with MySQL

A total of 13 servers; some are interconnected, but the majority are completely isolated.
Kubernetes is new to me, but I am actively upskilling every day and working on containerizing those apps (reading material on this matter would be greatly appreciated).

What I am wondering is what an ideal architecture would be to host all of them, and possibly others in the future.

  • How to organize the clusters? Per project? Per technology? Shared?
  • What about the pods?
  • How to implement different environments for all, or some, of them (staging and prod)?
  • Is it ideal to share a containerized MySQL across various apps? Or are we better off with a DB service per project?
  • Could autoscaling be problematic in a setup like this?
  • Should CI/CD be considered from the very beginning since this is a fresh start?

Thank you in advance


2 answers

Hi,

You need to be clear about what a k8s migration involves; I recommend learning k8s first to accomplish your goal. Are you using Ansible as an infrastructure orchestrator? Thousands of VPSs can be managed with a few Ansible playbooks.

My point is that you need to re-deploy your services as microservices. Think about whether to deploy your own Postgres, MySQL or MongoDB cluster, or use the managed databases DigitalOcean offers. By the way, some ideas:

1. How to organize the clusters? Per project? Per technology? Shared?
In k8s you can “separate” projects using namespaces, but by default a workload in one namespace can reach services in another namespace. In our case we have three different clusters: develop, staging and production. Access to each cluster is then more tightly controlled, with its own resource limits, etc.
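For example (the namespace name below is only a placeholder), each project can get its own namespace, and a default-deny NetworkPolicy keeps other namespaces from reaching its services unless you explicitly allow it. This only takes effect if the cluster’s network plugin enforces NetworkPolicy:

```yaml
# One namespace per project; the name is just an example.
apiVersion: v1
kind: Namespace
metadata:
  name: wordpress-site1
---
# Deny all incoming traffic to pods in this namespace by default,
# so workloads in other namespaces cannot reach them unless a more
# specific NetworkPolicy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: wordpress-site1
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress
```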

2. What about the pods?
Pods run your containers. The most important thing is to set resource requests and limits for CPU and memory (necessary for the Horizontal Pod Autoscaler to work). Pods are cattle, not pets: expect them to be created and destroyed at any time.
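A minimal sketch of that on a Deployment (the image name and the numbers are placeholders you would tune per app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: wordpress-site1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/frontend:1.0.0   # placeholder image
          resources:
            requests:            # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:              # hard caps; the HPA scales on usage vs. requests
              cpu: 500m
              memory: 512Mi
```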

3. How to implement different environments for all, or some, of them (staging and prod)?
Start with a single cluster and create all the required namespaces there. Later, move production (and staging) out to new clusters of their own.
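One common way to manage this (not the only one) is a kustomize base with one overlay per environment; the names below are made up:

```yaml
# overlays/staging/kustomization.yaml
# Reuses the manifests in base/ but pins the staging namespace and image tag.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp-staging
resources:
  - ../../base
images:
  - name: registry.example.com/myapp
    newTag: staging
```

Then `kubectl apply -k overlays/staging` rolls out the whole environment, and the production overlay typically differs only in namespace, image tag and replica counts.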

4. Is it ideal to share a containerized MySQL across various apps? Or better off with a DB service per project?
Divide and conquer. The golden rule for a microservices implementation is that each service has its own database. Sometimes you can’t, but sharing a database creates highly coupled applications, which is not good, especially when pods will be created and destroyed. For production, I recommend moving this responsibility to the infrastructure provider, so that backups, synchronization, high availability, I/O, throughput and the rest are not on your side. :)
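If you do hand MySQL to the provider, the app pods only need the connection details; a sketch (with placeholder values) using a Secret exposed as environment variables:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
  namespace: myapp-prod
type: Opaque
stringData:                        # placeholder values for a managed MySQL instance
  DB_HOST: your-managed-mysql-host.example.com
  DB_USER: myapp
  DB_PASSWORD: change-me
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0      # placeholder image
          envFrom:
            - secretRef:
                name: mysql-credentials   # injects DB_HOST/DB_USER/DB_PASSWORD as env vars
```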

5. Could autoscaling be problematic in a setup like this?
k8s has a good Horizontal Pod Autoscaler (HPA). There is also the Cluster Autoscaler, the Vertical Pod Autoscaler, etc. There are other tools, like Istio, with similarly useful features. Autoscaling should be a priority.
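For reference, a minimal HPA (autoscaling/v2) targeting the Deployment sketched earlier; it only works if the pods declare CPU requests and the metrics-server is running in the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
  namespace: wordpress-site1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70% of requests
```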

6. Should CI/CD be considered from the very beginning since this is a fresh start?
Required. You’ll also need a private image registry (Portus or Harbor are good options); for CI/CD you can use Jenkins, CircleCI or GitLab; for continuous testing and code quality you can use SonarQube, etc.
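Since GitLab was mentioned, here is a deliberately simplified .gitlab-ci.yml sketch; it assumes a Dockerfile in the repo, GitLab’s built-in registry variables, and a KUBECONFIG supplied as a CI/CD variable of type “File”:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Build and push the image using GitLab's built-in registry variables.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-staging:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]               # override the image entrypoint so the script runs
  script:
    # KUBECONFIG is assumed to be a "File" type CI/CD variable pointing at the cluster.
    - kubectl -n myapp-staging set image deployment/frontend frontend="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: staging
```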

I hope this information gives you a clearer picture of the requirements you are facing :)

An option to consider: this is my personal opinion, but running MySQL scalably and reliably is not trivial. Consider using a managed MySQL service and connecting to it from your Kubernetes cluster.
