Community Curriculum

Kubernetes for Full-Stack Developers

Table of Contents

Whether you’re just curious, getting started with Kubernetes, or have experience with it, this curriculum will help you learn more about Kubernetes and running containerized applications. You’ll learn about core Kubernetes concepts and use them to deploy and scale applications in practical tutorials. By the end of this curriculum you’ll be able to create your own Kubernetes cluster from scratch and run your own applications on it. You will also learn how to set up monitoring, alerting, and automation for your applications on Kubernetes.

1. Introductory Topics

  • tutorial

    An Introduction to Kubernetes

    Kubernetes is a powerful open-source system that manages containerized applications in a clustered environment. It is designed to manage distributed applications and services across varied infrastructure.

    In this guide, we’ll discuss basic Kubernetes concepts. We will talk about its system architecture, the problems it solves, and the model that it uses to handle containerized deployments and scaling.

    After reading this guide, you should be familiar with core Kubernetes concepts like the kube-apiserver, Nodes, Pods, Services, Deployments, and Volumes.
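    To give a sense of the model these objects describe, here is a minimal sketch of a Deployment paired with a Service. The names, image, and ports are placeholders for illustration, not taken from the tutorial:

    ```yaml
    # Hypothetical manifest: a Deployment that runs three replicas of an
    # nginx Pod, and a Service that exposes them inside the cluster.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
            - name: web
              image: nginx:1.25
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-web
    spec:
      selector:
        app: hello-web
      ports:
        - port: 80
          targetPort: 80
    ```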

    Other tutorials in this curriculum explore each of these components and their different use cases in further depth.

    Go to tutorial
  • tutorial

    How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04

    In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it. You will be able to use the cluster that you create in this tutorial in subsequent tutorials.

    While the first tutorial in this curriculum introduces some of the concepts and terms that you will encounter when running an application in Kubernetes, this tutorial focuses on the steps required to build a working Kubernetes cluster.

    This tutorial uses Ansible to automate some of the more repetitive tasks like user creation, dependency installation, and network setup in the cluster. If you would like to create a cluster manually, the tutorial provides a list of resources that includes the official Kubernetes documentation, which you can use instead of Ansible.

    By the end of this tutorial you should have a functioning Kubernetes cluster that consists of three Nodes (a master and two worker Nodes). You will also deploy Nginx to the cluster to confirm that everything works as intended.

    Go to tutorial
  • tutorial

    Webinar Series: A Closer Look at Kubernetes

    In this tutorial, you will learn how Kubernetes primitives work together as you deploy a Pod in Kubernetes, expose it as a Service, and scale it with a Replication Controller.

    Go to tutorial
  • tutorial

    An Introduction to Helm, the Package Manager for Kubernetes

    Setting up and running an application on a Kubernetes cluster can involve creating multiple interdependent Kubernetes resources. Each Pod, Service, Deployment, and ReplicaSet requires its own YAML manifest file that must be authored and tested before an application is made available in a cluster.

    Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters. Helm packages are called charts, which consist of YAML configuration files and templates that reduce or eliminate the need to write YAML manifests from scratch to deploy an application.
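    As a rough sketch of what a chart looks like, the layout below is a common (hypothetical) example; the chart name and template values are placeholders:

    ```yaml
    # Hypothetical layout of a Helm chart:
    #
    #   mychart/
    #     Chart.yaml        # chart metadata: name, version, description
    #     values.yaml       # default configuration values
    #     templates/        # templated Kubernetes manifests
    #       deployment.yaml
    #       service.yaml
    #
    # Excerpt from templates/deployment.yaml -- at install time, Helm
    # substitutes entries from values.yaml into the template:
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    replicas: {{ .Values.replicaCount }}
    ```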

    By the end of this tutorial, you should be familiar with Helm charts, and be able to decide if using a chart to deploy an application requires more or less work than writing YAML files directly.

    Go to tutorial
  • tutorial

    How To Install Software on Kubernetes Clusters with the Helm Package Manager

    The previous Helm tutorial introduced the concept of package management in a Kubernetes cluster. In this hands-on tutorial, we will set up Helm and use it to install, reconfigure, rollback, then delete an instance of the Kubernetes Dashboard application.

    By the end of this tutorial, you will have a working Kubernetes dashboard that you can use to administer your cluster. You will also have Helm set up so that you can install any of the supported open source applications in Helm’s official chart repository, as well as your own custom Helm charts.

    Go to tutorial

2. Containers, Modernizing Applications and 12 Factor Development

  • Welcome to Part 2 of the DigitalOcean Kubernetes for Full-Stack Developers course. In the previous section we covered Kubernetes fundamentals and deployed a demo web server on Kubernetes. In this section we will learn about architecting and modernizing applications to run on Kubernetes. We will also explore containerization with tools like Docker Compose, build a Node.js application that runs in a Docker container, and set up cluster monitoring using Helm and Prometheus Operator.

  • tutorial

    Architecting Applications for Kubernetes

    How you architect and design your applications will determine how you build and deploy them to Kubernetes. One design methodology that works well for applications running on Kubernetes is The Twelve-Factor App. Some of its core principles include separating code from configuration, making applications stateless, ensuring app processes are disposable (they can be started and stopped with no side effects), and facilitating easy scaling. This tutorial will guide you through designing, scaling, and containerizing your applications using Twelve-Factor as a framework.
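    For example, the separation of code from configuration can be expressed directly in a Kubernetes manifest. The excerpt below is a hypothetical sketch, with placeholder names, showing configuration injected through environment variables rather than baked into the image:

    ```yaml
    # Hypothetical excerpt from a Deployment's Pod spec: the database URL
    # comes from a ConfigMap, so the same image runs in any environment.
    containers:
      - name: app
        image: example/app:1.0
        env:
          - name: DATABASE_URL
            valueFrom:
              configMapKeyRef:
                name: app-config
                key: database_url
    ```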

    Go to tutorial
  • tutorial

    Modernizing Applications for Kubernetes

    The previous tutorial explored key ideas and application design techniques to build applications that will run effectively on Kubernetes. This guide will focus on modernizing an existing application to run on Kubernetes. To prepare for migration, there are some important application-level changes to implement that will maximize your app’s portability and observability in Kubernetes.

    You will learn how to extract configuration data from code and externalize application state using databases and data stores for persistent data. You will also build in health checks and code instrumentation for logging and monitoring, thereby creating an infrastructure to identify errors in your cluster more effectively. After covering application logic, this tutorial examines some best practices for containerizing your app with Docker.

    Finally, this guide discusses some core Kubernetes components for managing and scaling your app. Specifically, you will learn how to use Pods, ConfigMaps, Secrets, and Services to deploy and manage a modernized application on Kubernetes.
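    As a minimal sketch of two of those components, the manifest below shows a ConfigMap for non-sensitive settings alongside a Secret for credentials; the names and values are illustrative placeholders:

    ```yaml
    # Hypothetical ConfigMap and Secret: settings and credentials are
    # stored separately from the application image.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      LOG_LEVEL: info
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-secrets
    type: Opaque
    stringData:
      DB_PASSWORD: change-me
    ```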

    Go to tutorial
  • tutorial

    How To Build a Node.js Application with Docker

    This tutorial is a first step towards writing an example Node.js application that will run on Kubernetes. When building and scaling an application on Kubernetes, the starting point is typically creating a Docker image, which you can then run as a Pod in a Kubernetes cluster. The image includes your application code, dependencies, environment variables, and application runtime environment. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.
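    A Docker image of this kind is defined by a Dockerfile. The sketch below is a hypothetical minimal example for a Node.js app, not the exact file from the tutorial; the entry point and port are placeholders:

    ```dockerfile
    # Hypothetical minimal Dockerfile for a Node.js application.
    FROM node:18-alpine
    WORKDIR /app
    # Install dependencies first so this layer is cached between builds.
    COPY package*.json ./
    RUN npm install --omit=dev
    COPY . .
    EXPOSE 8080
    CMD ["node", "app.js"]
    ```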

    In this tutorial, you will create an application image for a static website that uses the Express Node.js framework and Bootstrap front-end library. You will then push the image to Docker Hub for future use and then run a container using that image. Finally, you will pull the stored image from your Docker Hub repository and run another container, demonstrating how you can quickly recreate and scale your application. As you move through this curriculum, subsequent tutorials will expand on this initial image until it is up and running directly on Kubernetes.

    Go to tutorial
  • tutorial

    Containerizing a Node.js Application for Development With Docker Compose

    This tutorial is a second step towards writing an example Node.js application that will run on Kubernetes. Building on the previous tutorial, you will create two containers — one for the Node.js application and another for a MongoDB database — and coordinate running them with Docker Compose.

    This tutorial demonstrates how to use multiple containers with persistent data. It also highlights the importance of separating the application code from the data store. This design will ensure that the final Docker image for the Node.js application is stateless and that it will be ready to run on Kubernetes by the end of this curriculum.
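    A Compose file for this kind of two-container setup might look like the hypothetical sketch below; service names, ports, and the volume name are placeholders:

    ```yaml
    # Hypothetical docker-compose.yml: a stateless Node.js service plus a
    # MongoDB container whose data lives in a named volume, not the image.
    services:
      app:
        build: .
        ports:
          - "8080:8080"
        environment:
          MONGO_URL: mongodb://db:27017/app
        depends_on:
          - db
      db:
        image: mongo:6
        volumes:
          - dbdata:/data/db
    volumes:
      dbdata:
    ```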

    Go to tutorial
  • tutorial

    How to Set Up DigitalOcean Kubernetes Cluster Monitoring with Helm and Prometheus Operator

    Along with tracing and logging, monitoring and alerting are essential components of a Kubernetes observability stack. Setting up monitoring for your Kubernetes cluster allows you to track your resource usage and analyze and debug application errors.

    One popular monitoring solution is the open-source Prometheus, Grafana, and Alertmanager stack. In this tutorial you will learn how to use the Helm package manager for Kubernetes to install all three of these monitoring tools into your Kubernetes cluster.

    By the end of this tutorial, you will have cluster monitoring set up with a standard set of dashboards to view graphs and health metrics for your cluster, Prometheus rules for collecting health data, and alerts to notify you when something is not performing or behaving properly.

    Go to tutorial

3. Containers

  • It’s time for Phase 3 of the DigitalOcean Kubernetes for Full-Stack Developers course. In Phase 2 we learned about modernizing an application before containerizing it, and explored running a demo Node.js application in Docker. We also used Docker Compose to learn about running multiple containers together. This section of the curriculum will elaborate on using Docker Compose to coordinate development with multiple containers. Once you have worked through the tutorials in this section, you will be ready to start deploying your applications on a Kubernetes cluster.

  • tutorial

    How To Set Up Laravel, Nginx, and MySQL with Docker Compose

    In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

    Go to tutorial
  • tutorial

    How To Migrate a Docker Compose Workflow to Kubernetes

    To run your services on a distributed platform like Kubernetes, you will need to translate your Docker Compose service definitions to Kubernetes objects. Kompose is a conversion tool that helps developers move their Docker Compose workflows to container clusters like Kubernetes.

    In this tutorial, you will translate your Node.js application’s Docker Compose services into Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Node.js application with a MongoDB database running on a Kubernetes cluster.

    Go to tutorial
  • tutorial

    Building Optimized Containers for Kubernetes

    In this article you will learn some strategies for creating high-quality images and explore a few general goals to help guide your decisions when containerizing applications. The focus is on building images intended to be run on Kubernetes, but many of the suggestions apply equally to running containers on other orchestration platforms and in other contexts.

    There are a number of suggestions and best practices that you will learn about in this tutorial. Some of the more important ones are:

    1. Use minimal, shareable parent images to build application images. This strategy will ensure fast image builds and fast container start-up times in a cluster.
    2. Combine Dockerfile instructions to create clean image layers and avoid image caching mistakes.
    3. Containerize applications by isolating discrete functionality, and design Pods based on applications with a single, focused responsibility.
    4. Bundle helper containers to enhance the main container’s functionality or to adapt it to the deployment environment.
    5. Run applications as the primary processes in containers so Kubernetes can manage lifecycle events.
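    Suggestion 2 above can be sketched in a short (hypothetical) Dockerfile excerpt: chaining related commands into a single RUN instruction produces one clean layer and avoids stale package caches:

    ```dockerfile
    # Hypothetical example of combining instructions into one layer.
    FROM debian:bookworm-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    ```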
    Go to tutorial
  • tutorial

    How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm

    In this tutorial, you will deploy your Node.js shark application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image.

    By the end of this tutorial you will have deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm’s stable repository and other chart repositories.

    Go to tutorial
  • tutorial

    How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DigitalOcean Kubernetes

    In this tutorial, you’ll deploy a private Docker registry to your Kubernetes cluster using Helm. A self-hosted Docker registry lets you privately store, distribute, and manage your Docker images. While this tutorial focuses on using DigitalOcean’s Kubernetes and Spaces products, the principles of running your own registry in a cluster apply to any Kubernetes stack.

    At the end of this tutorial, you’ll have a secure, private Docker registry that uses DigitalOcean Spaces (or another S3-compatible object storage system) to store your images. Your Kubernetes cluster will be configured to use the self-hosted registry so that your containerized applications remain private and secure.

    Go to tutorial

4. Deployment Strategies

  • In Phase 4 of this curriculum, you will deploy a full PHP application with supporting Nginx Pods to your Kubernetes cluster. You will also automate deployments using Continuous Integration (CI) and Continuous Deployment (CD) techniques, leveraging tools like CircleCI and Spinnaker to build your CI/CD pipelines.

    By the end of this section you will be able to write your own Kubernetes object specs and manifests, including Services, ConfigMaps, and Secrets. You will also be able to build a deployment pipeline for your application and control access to your cluster using ServiceAccounts, which rely on Kubernetes’ Role-based access control (RBAC) authorization and permissions model.

  • tutorial

    How To Deploy a PHP Application with Kubernetes on Ubuntu 18.04

    In this tutorial, you will deploy a PHP application on a Kubernetes cluster with Nginx and PHP-FPM running in separate Pods. You will also learn how to keep your configuration files and application code outside the container image using DigitalOcean’s Block Storage system. This approach will allow you to reuse the Nginx image for any application that needs a web/proxy server by passing a configuration volume, rather than rebuilding the image.

    Go to tutorial
  • tutorial

    How To Automate Deployments to DigitalOcean Kubernetes with CircleCI

    Having an automated deployment process is a requirement for a scalable and resilient application. Tools like CircleCI allow you to test and deploy your code automatically every time you make a change to your source code repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand.

    In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images.

    Go to tutorial
  • tutorial

    How To Set Up a CD Pipeline with Spinnaker on DigitalOcean Kubernetes

    In this tutorial, you’ll deploy Spinnaker, an open-source resource management and continuous delivery application, to your Kubernetes cluster. Spinnaker enables automated application deployments to many platforms and can integrate with other DevOps tools, like Jenkins and Travis CI. Additionally, it can be configured to monitor code repositories and Docker registries for completely automated Continuous Delivery development and deployment processes.

    By the end of this tutorial you will be able to manage applications and development processes on your Kubernetes cluster using Spinnaker. You will automate the start of your deployment pipelines using triggers, such as a new Docker image being added to your private registry or new code being pushed to a Git repository.

    Go to tutorial

5. Operate a Kubernetes Cluster

  • The last section in this curriculum will explore some more advanced Kubernetes topics. You’ll learn how Kubernetes networking and DNS work, why and how to use an Ingress controller instead of a LoadBalancer, and how to coordinate communication between application components using Service Meshes. You will also learn how to create regular backups and perform cluster upgrades.

    Since a well-run cluster needs centralized logging, you’ll also set up a logging stack using Elasticsearch, Fluentd, and Kibana. This setup will let you capture application logs from Pods in your cluster for easy troubleshooting and monitoring.

    Finally, this section will show how to use Kubernetes Ingresses with cert-manager to secure your services with Let’s Encrypt TLS certificates. Once your services are set up with TLS encryption, the last tutorial will show you how to protect your web services running in Kubernetes using oauth2_proxy for authentication.

  • tutorial

    Kubernetes Networking Under the Hood

    This tutorial discusses how data moves inside a Pod, between Pods, and between Nodes. It also shows how a Kubernetes Service can provide a single static IP address and DNS entry for an application, easing communication with services that may be distributed among multiple constantly scaling and shifting Pods. This tutorial also includes detailed hop-by-hop explanations of the different journeys that packets can take depending on the network configuration.

    Go to tutorial
  • tutorial

    How To Inspect Kubernetes Networking

    Maintaining network connectivity between all the containers in a cluster requires some advanced networking techniques. Thankfully Kubernetes does all of the work to set up and maintain its internal networking. However, when things do not work as expected, tools like kubectl, Docker, nsenter, and iptables are invaluable for inspecting and troubleshooting a Kubernetes cluster’s network setup. These tools are useful for debugging routing and connectivity issues, investigating network throughput problems, and generally exploring Kubernetes to learn how it operates.

    Go to tutorial
  • tutorial

    An Introduction to Service Meshes

    A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. Service meshes are designed to facilitate service-to-service communication through service discovery, routing and internal load balancing, traffic configuration, encryption, authentication and authorization, and metrics and monitoring.

    This tutorial will use Istio’s Bookinfo sample application — four microservices that together display information about particular books — as a concrete example to illustrate how service meshes work.

    Go to tutorial
  • tutorial

    How To Back Up and Restore a Kubernetes Cluster on DigitalOcean Using Heptio Ark

    In this tutorial you will learn how to back up and restore your Kubernetes cluster. First you will set up and configure the backup client on a local machine, and deploy the backup server into your Kubernetes cluster. You’ll then deploy a sample Nginx app that uses a Persistent Volume for logging and simulate a disaster recovery scenario. After completing all the recovery steps you will have restored service to the test Nginx application.

    Go to tutorial
  • tutorial

    How To Set Up an Elasticsearch, Fluentd and Kibana (EFK) Logging Stack on Kubernetes

    When running multiple services and applications on a Kubernetes cluster, a centralized, cluster-level logging stack can help you quickly sort through and analyze the heavy volume of log data produced by your Pods. In this tutorial, you will learn how to set up and configure Elasticsearch, Fluentd, and Kibana (the EFK stack) on your Kubernetes cluster.

    To start, you will configure and launch a scalable Elasticsearch cluster on top of your Kubernetes cluster. From there you will create a Kubernetes Service and Deployment for Kibana so that you can visualize and work with your logs. Finally, you will set up Fluentd as a Kubernetes DaemonSet so that it runs on every worker Node and collects logs from every Pod in your cluster.
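    A DaemonSet of this kind might be sketched as follows; the image, namespace, and paths are illustrative placeholders, not the exact manifest from the tutorial:

    ```yaml
    # Hypothetical excerpt of a DaemonSet: one Fluentd Pod per worker
    # Node, reading container logs from the host filesystem.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: kube-logging
    spec:
      selector:
        matchLabels:
          app: fluentd
      template:
        metadata:
          labels:
            app: fluentd
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
          volumes:
            - name: varlog
              hostPath:
                path: /var/log
    ```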

    Go to tutorial
  • tutorial

    How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes

    Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This routing is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services.
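    An Ingress Resource of this kind might look like the hypothetical sketch below; the hostname, Service name, and ingress class are placeholders:

    ```yaml
    # Hypothetical Ingress Resource: HTTP requests for echo.example.com
    # are routed to a backend Service named echo on port 80.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: echo-ingress
    spec:
      ingressClassName: nginx
      rules:
        - host: echo.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: echo
                    port:
                      number: 80
    ```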

    In this guide, you will set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once the Ingress is in place, you will install cert-manager into your cluster to manage and provision TLS certificates using Let’s Encrypt for encrypting web traffic to your applications.

    Go to tutorial
  • tutorial

    How to Protect Private Kubernetes Services Behind a GitHub Login with oauth2_proxy

    Kubernetes ingresses make it easy to expose web services to the internet. When it comes to private services, however, you will likely want to limit who can access them. In this tutorial you’ll use oauth2_proxy with GitHub to protect your services. oauth2_proxy is a reverse proxy server that provides authentication using different providers, such as GitHub, and validates users based on their email address or other properties.

    By the end of this tutorial you will have set up oauth2_proxy on your Kubernetes cluster and protected a private service behind a GitHub login. oauth2_proxy also supports other OAuth providers like Google and Facebook, so by following this tutorial you will be able to protect your services using the provider of your choice.

    Go to tutorial