Moving a multitude of web apps to Kubernetes

Posted on January 15, 2020

Hello all,

I am researching how best to move a portfolio of apps hosted for a small digital agency to managed Kubernetes on DigitalOcean (DO), to reduce cost and management overhead.

They span a variety of technologies and hosting setups, but each runs on its own VPS:

  • 3 Instances running a large PHP app (servers for the API, Frontend + ControlPanel and MongoDB)
  • 2 LAMP instances with WordPress
  • 5 LEMP instances with WordPress
  • 2 instances running a NodeJS app (servers for the app and MySQL)
  • 1 instance running a large Django app with MySQL

That is 13 servers in total; some are interconnected, but the majority are completely isolated. Kubernetes is new to me, but I am actively upskilling every day and working on containerizing those apps (reading material on this topic would be greatly appreciated).

What I am wondering is what an ideal architecture would look like for hosting all of them, and possibly others, in the future.

  • How to organize the clusters? Per project? Per technology? Shared?
  • What about the pods?
  • How to implement different environments for all, or some, of them (staging and prod)?
  • Is it ideal to share a containerized MySQL across various apps? Or better off with a DB service per project?
  • Could autoscaling be problematic in a setup like this?
  • Should CI/CD be considered from the very beginning since this is a fresh start?

Thank you in advance




Hi,

you need to be clear about what a k8s migration involves. I recommend learning k8s first so you can accomplish your goal. Are you using Ansible as an infrastructure orchestrator? Thousands of VPSs can be managed with a few Ansible playbooks.

My point is that you need to re-deploy your services as microservices. Also think about whether to run your own Postgres, MySQL or MongoDB cluster, or use a managed database offering from DigitalOcean. By the way, some ideas:

1. How to organize the clusters? Per project? Per technology? Shared? In k8s you can “separate” projects using namespaces, but by default a workload in one namespace can reach services in another namespace. In our case we have three different clusters: develop, staging and production. Access to the different clusters is more tightly controlled, with resource limits, etc.
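
If you do want namespace-level isolation inside a shared cluster, a NetworkPolicy can restrict cross-namespace traffic. A minimal sketch, assuming a hypothetical namespace `project-a` and a CNI plugin that enforces NetworkPolicy:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: project-a
---
# Only pods inside project-a may reach pods in project-a;
# traffic coming from other namespaces is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: project-a
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # any pod in the same namespace
```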

2. What about the pods? Pods hold your containers. The most important thing is to set resource requests and limits for CPU and memory (they are needed for the Horizontal Pod Autoscaler to work well). Treat pods like cattle, not pets: they are disposable and will be created and destroyed constantly.
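
As an illustration, a minimal Deployment sketch with requests and limits (the name, namespace and image tag are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-a-wordpress
  namespace: project-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: site-a-wordpress
  template:
    metadata:
      labels:
        app: site-a-wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest   # pin a specific tag in practice
          ports:
            - containerPort: 80
          resources:
            requests:               # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:                 # hard caps; exceeding the memory limit gets the container OOM-killed
              cpu: 500m
              memory: 512Mi
```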

3. How to implement different environments for all, or some, of them (staging and prod)? Start with a single cluster and create all the required namespaces there. Later, you can move prod and staging out into separate clusters.

4. Is it ideal to share a containerized MySQL across various apps? Or better off with a DB service per project? Divide and conquer. The golden rule of microservices is that each service has its own database. Sometimes you can’t avoid sharing, but it creates highly coupled applications, which is not good, especially when pods are constantly being created and destroyed. For production, I recommend moving this responsibility to the infrastructure provider, so that backups, synchronization, high availability, I/O, throughput and the rest of the operational work are not on your side. :)

5. Could autoscaling be problematic in a setup like this? k8s has a good Horizontal Pod Autoscaler (HPA). Scaling can also happen at other levels: the Cluster Autoscaler, the Vertical Pod Autoscaler, etc. There are other solutions, such as Istio, with similarly impressive features. Autoscaling should be a priority.
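
A minimal HPA sketch targeting the Deployment above (names are placeholders; it requires metrics-server running in the cluster):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: site-a-wordpress
  namespace: project-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: site-a-wordpress
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 70   # scale out when average CPU passes 70% of the request
```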

6. Should CI/CD be considered from the very beginning since this is a fresh start? Required. You will also need a private image registry (Portus or Harbor are good options); for CI/CD you can use Jenkins, CircleCI or GitLab, for continuous testing you can use SonarQube, etc.
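
To make that concrete, here is a skeleton GitLab CI pipeline, purely as a sketch: the registry URL, image name, container name and CI variables are all hypothetical, and the deploy job assumes the runner already has cluster credentials (e.g. via a KUBECONFIG CI variable).

```yaml
# .gitlab-ci.yml
stages:
  - build
  - deploy

variables:
  IMAGE: registry.example.com/agency/site-a:$CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:19.03
  services:
    - docker:19.03-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"   # required for the Docker-in-Docker service
  script:
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.example.com
    - docker build -t "$IMAGE" .
    - docker push "$IMAGE"

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # "wordpress" must match the container name in the Deployment
    - kubectl -n project-a-staging set image deployment/site-a-wordpress wordpress="$IMAGE"
```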

I hope this information gives you a clearer picture of the requirements you will face. :)

An option to consider: this is my personal opinion, but running MySQL in a scalable, reliable way is not trivial. Consider using a managed MySQL service and connecting to it from your Kubernetes cluster.

Moving your digital agency’s portfolio to managed Kubernetes on DigitalOcean (DO) is a strategic choice that can enhance scalability, reduce overhead, and streamline operations. To help guide you through this transition, I’ll address your questions and provide an overview of an ideal architecture, considering the variety and complexity of your applications.

1. Organizing Clusters and Pods

Cluster Organization:

  • By Project: If apps need to scale independently and have varying resource needs, organizing clusters by project might be the better option. This isolates projects, simplifying management and security.
  • By Technology: This can be beneficial if you have a significant number of similar tech stacks, e.g., all WordPress sites could potentially share a cluster. This reduces overhead but can complicate scaling and isolation.
  • Shared Cluster: While this is the most resource-efficient, it might increase complexity in management, monitoring, and scaling.

Given the variety in your portfolio, a combination approach might work best. For example, group common technologies such as the WordPress sites in a shared cluster, but keep large, complex applications like your Django app in separate clusters if they have specific needs.

Pod Organization:

  • Organize pods so that each component of your applications (e.g., frontend, backend, database) runs in separate pods. This enhances scalability and maintainability.
  • Use Kubernetes namespaces to logically separate different environments (staging, production) within the same cluster.

2. Different Environments Implementation

  • Utilize namespaces in Kubernetes to separate environments. This can help you manage staging and production environments within the same cluster without interference.
  • Use different deployment configurations and services for each namespace. Ensure that configuration management tools (like Helm charts) are parameterized to deploy to specific namespaces.
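
For example, with a hypothetical Helm chart per app, each environment can get its own small values file (the keys below are illustrative, not from any particular chart):

```yaml
# values/staging.yaml
replicaCount: 1
ingress:
  host: staging.site-a.example.com
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
# values/production.yaml
replicaCount: 3
ingress:
  host: site-a.example.com
resources:
  requests:
    cpu: 250m
    memory: 256Mi
```

The same chart is then released into each namespace, e.g. `helm upgrade --install site-a ./charts/site-a -n project-a-staging -f values/staging.yaml`.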

3. Database Management

  • Shared vs. Dedicated Databases: Sharing a MySQL instance across multiple apps can reduce costs but might increase the risk of performance bottlenecks and complicate backup and restore processes. Consider using Kubernetes operators like the MySQL Operator to manage independent MySQL instances within the cluster.
  • Managed Database Services: For critical applications, consider using DO’s managed databases to ensure high availability, backups, and scalability without the overhead of managing them yourself in Kubernetes.
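
If you go the managed-database route, the application pods only need the connection details, which you can keep in a Kubernetes Secret. A sketch with placeholder values (copy the real hostname, port and credentials from the managed database's connection details):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: site-a-mysql
  namespace: project-a
type: Opaque
stringData:                      # placeholder values only
  DB_HOST: example-mysql-do-user-000000-0.db.ondigitalocean.com
  DB_PORT: "25060"
  DB_USER: site_a
  DB_PASSWORD: change-me
  DB_NAME: site_a
```

The application container can then pull these in with `envFrom: [{secretRef: {name: site-a-mysql}}]` instead of baking credentials into the image.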

4. Autoscaling Considerations

  • Autoscaling within Kubernetes can be set up using Horizontal Pod Autoscalers (HPA), which monitor your application load and automatically adjust the number of pod replicas.
  • Be mindful of database connections and other shared resources when autoscaling; these can become bottlenecks if not appropriately configured.

5. CI/CD Integration

  • CI/CD is crucial from the beginning. It automates the deployment process, ensuring that updates to your applications are smooth and error-free.
  • Implement CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions. These tools can build your containers, run tests, and deploy to Kubernetes automatically.
  • Consider integrating quality checks, security scans, and performance tests into your CI/CD pipelines to maintain high standards.

Recommended Resources

  • Kubernetes Official Documentation: Comprehensive resource for all aspects of Kubernetes.
  • “Kubernetes Up & Running” by Kelsey Hightower: Great for getting a deep understanding of how Kubernetes works.
  • DigitalOcean Kubernetes Resources: Guides and tutorials specifically for managing Kubernetes in DO.
  • “The DevOps Handbook”: Offers insights into efficient CI/CD practices.

Final Thoughts

Transitioning to Kubernetes is an excellent opportunity to standardize and simplify the deployment and management of your applications. It’s advisable to start small, perhaps by moving less critical applications first to gain familiarity with Kubernetes operations. Once you are comfortable, you can proceed to migrate more significant parts of your portfolio. Remember, Kubernetes management can be complex initially but offers substantial benefits in scalability, resilience, and efficiency in the long run.
