How To Use Telepresence on Kubernetes for Rapid Development on Ubuntu 20.04

The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

Introduction

Application developers building microservices on Kubernetes often encounter two major problems that slow them down:

  • Slow feedback loops. Once a code change is made, it must be deployed to Kubernetes to be tested. This requires a container build, push to a container registry, and deployment to Kubernetes. This adds minutes to every code iteration.
  • Insufficient memory and CPU locally. Developers attempt to speed up the feedback loop by running Kubernetes locally with minikube or the equivalent. However, resource-hungry applications quickly exceed the compute and memory available locally.

Telepresence is a Cloud-Native Computing Foundation project for fast, efficient development on Kubernetes. With Telepresence, you run your service locally, while you run the rest of your application in the cloud. Telepresence creates a bi-directional network connection between your Kubernetes cluster and your local workstation. This way, the service you’re running locally can communicate with services in the cluster, and vice versa. That allows you to use the compute and memory resources of the cluster, but without having to go through a complete deployment cycle for each change.

In this tutorial, you’ll configure Telepresence on your local machine running Ubuntu 20.04 to work with a Kubernetes cluster. You’ll intercept traffic to your cluster and redirect it to your local environment.


Prerequisites

To complete this tutorial, you will need:

  • A Kubernetes cluster, with the kubectl command-line tool installed and configured to connect to it from your local workstation.
  • A local machine running Ubuntu 20.04.
  • Node.js installed on your local machine, so you can run the sample service locally in Step 3.

Step 1 — Installing Telepresence

In this step, you’ll install Telepresence and connect it to your Kubernetes cluster. First, make sure that you have kubectl configured and that you can connect to your Kubernetes cluster from your local workstation. Use the get services command to check your cluster’s status:

  1. kubectl get services

The output will look like this, with your own cluster’s IP address listed:

Output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.245.0.1   <none>        443/TCP   116m

Next, you’ll install Telepresence locally. Telepresence is distributed as a single binary.

Use curl to download the latest binary for Linux (around 50 MB):

  1. sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o /usr/local/bin/telepresence

Then use chmod to make the binary executable:

  1. sudo chmod a+x /usr/local/bin/telepresence
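Optionally, confirm that the binary is on your PATH and runs by printing its version (the exact output depends on the release you downloaded):

  1. telepresence version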

Now that you have Telepresence installed locally, you can verify that it worked by connecting to your Kubernetes cluster:

  1. telepresence connect

You’ll see the following output:

Output
Launching Telepresence Daemon
...
Connected to context default (https://<cluster public IP>)

If Telepresence doesn’t connect, check your kubectl configuration.
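For example, you can confirm which context kubectl is using and that the cluster is reachable; these are standard kubectl checks, not Telepresence commands:

  1. kubectl config current-context
  2. kubectl cluster-info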

Verify that Telepresence is working properly by checking its connection to the cluster with the status command:

  1. telepresence status

You will see the following output. The line Telepresence proxy: ON indicates that Telepresence has configured a proxy to access services on the cluster.

Output
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://7c10e553-10d1-4fee-9b7d-1ccbce4cdd34.k8s.ondigitalocean.com
  Kubernetes context: <your_kubernetes_context>
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 0 total

When you use telepresence connect, Telepresence creates a namespace called ambassador on the cluster and runs a traffic manager in it. On the client side, Telepresence sets up DNS so that you can reach remote services from your local machine, which means you do not have to use kubectl port-forward to configure access to them manually. When you access a remote service, its DNS name resolves to a specific IP address. For more details, see the Telepresence architecture documentation.
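If you want to see this for yourself, you can list what Telepresence created in the ambassador namespace; you should find a traffic-manager Deployment there, though the exact resources vary by Telepresence version:

  1. kubectl get all --namespace ambassador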

You can now connect to the remote Kubernetes cluster from your local workstation, as if the Kubernetes cluster were running on your laptop. Next you’ll try out a sample application.

Step 2 — Adding a Sample Node.js Application

In this step, you’ll use a simple Node.js application to simulate a complex service running on your Kubernetes cluster. Instead of creating the application locally, you’ll deploy a prebuilt image from DockerHub to your cluster. The application is called hello-node, and it returns a text string:

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.write('Hello, Node!');
  response.end();
};

http.createServer(handleRequest).listen(9001);
console.log('Use curl <hostname>:9001 to access this server...');
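If you have Docker installed and want to inspect this prebuilt image before deploying it, you can optionally pull it from DockerHub first (this step is not required for the tutorial):

  1. docker pull docommunity/hello-node:1.0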

Use the kubectl create deployment command to create a deployment called hello-node:

  1. kubectl create deployment hello-node --image=docommunity/hello-node:1.0

You will see the following output:

Output
deployment.apps/hello-node created

Use the get pod command to confirm that the deployment has occurred and the app is now running on the cluster:

  1. kubectl get pod

The output will show a READY status of 1/1.

Output
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-86b49779bf-9zqvn   1/1     Running   0          11s
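You can also confirm that the app started by reading its logs; you should see the console.log startup message from the code above (your pod name will differ from the one shown here):

  1. kubectl logs deployment/hello-node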

Use the expose deployment command to make the application available on port 9001:

  1. kubectl expose deployment hello-node --type=LoadBalancer --port=9001

The output will look like this:

Output
service/hello-node exposed

Use the kubectl get svc command to check that the load balancer is running:

  1. kubectl get svc

The output will look like this, with your own IP addresses:

Output
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.245.75.48   <pending>     9001:30682/TCP   4s
kubernetes   ClusterIP      10.245.0.1     <none>        443/TCP          6d

If you are using a local Kubernetes cluster without load balancer support, the EXTERNAL-IP value for the LoadBalancer service will show as <pending> permanently. That is fine for the purposes of this tutorial. If you are using DigitalOcean Kubernetes, the EXTERNAL-IP value will display an IP address after a short delay.
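On DigitalOcean Kubernetes, you can watch the service until the external IP appears, then press CTRL+C to stop watching:

  1. kubectl get svc hello-node --watch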

Next, verify that the application is running by using curl to access the load balancer:

  1. curl <ip-address>:9001

If you’re not running a load balancer, you can use curl to access the service directly:

  1. curl <servicename>.<namespace>:9001

The output will look like this:

Output
Hello, Node!

Next, make sure Telepresence is still connected to the cluster. If you disconnected after Step 1, run telepresence connect again:

  1. telepresence connect

This allows you to access all remote services as if they were local, so you can access the service by name:

  1. curl hello-node.default:9001

You’ll receive the same response as you did when you accessed the service via its IP:

Output
Hello, Node!

The service is up and running on the cluster, and you can access it remotely. If you make any changes to the hello-node.js app, you’d need to take the following steps:

  • Modify the app.
  • Rebuild the container image.
  • Push it to a container registry.
  • Deploy to Kubernetes.

That is a lot of steps. You could use tooling (automated pipelines, such as Skaffold) to reduce the manual work. But the steps themselves cannot be bypassed.
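For reference, a typical manual loop looks something like the following sketch. The registry and image tag are placeholders, your own build and registry setup will differ, and the container name hello-node matches the default that kubectl create deployment derives from the image name:

  1. docker build -t <your_registry>/hello-node:2.0 .
  2. docker push <your_registry>/hello-node:2.0
  3. kubectl set image deployment/hello-node hello-node=<your_registry>/hello-node:2.0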

Now you’ll write a new version of the hello-node app and use Telepresence to test it without having to build a container image, push it to a registry, or deploy it to Kubernetes.

Step 3 — Running a New Version of the Service Locally

In this step, you’ll modify the existing hello-node application on your local machine. You’ll then use Telepresence to route traffic to the local version with a Telepresence intercept. The intercept takes traffic intended for your cluster and reroutes it to your local version of the service, so you can continue working in your development environment.

Create a new file containing a modified version of the sample application:

  1. nano hello-node-v2.js

Add the following code to the new file:

hello-node-v2.js
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.write('Hello, Node V2!');
  response.end();
};

http.createServer(handleRequest).listen(9001);

Save and exit the file.

Start the service with Node:

  1. node hello-node-v2.js

Leave the service running, then open a new terminal window and access the local service on port 9001:

  1. curl localhost:9001

The output will look like this:

Output
Hello, Node V2!

This new version of the service is only running locally, however. If you access the remote cluster, it is still serving version 1 of hello-node. To fix that, you’ll enable an intercept that routes all traffic going to the hello-node service in the cluster to the local version of the service.

Use the intercept command to set up the intercept:

  1. telepresence intercept hello-node --port 9001

The output will look like this:

Output
Using deployment hello-node
intercepted
    Intercept name    : hello-node
    State             : ACTIVE
    Destination       : 127.0.0.1:9001
    Volume Mount Error: sshfs is not installed on your local machine
    Intercepting      : all TCP connections

Check that the intercept has been set up correctly with the status command:

  1. telepresence status

The output will look like this:

Output
Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://7c10e553-10d1-4fee-9b7d-1ccbce4cdd34.k8s.ondigitalocean.com
  Kubernetes context: <your_kubernetes_context>
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    hello-node: user@context
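You can also list the workloads Telepresence can see and check which ones are currently intercepted (the exact output format depends on your Telepresence version):

  1. telepresence list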

Now access the remote service with curl as you did previously:

  1. curl <ip-address>:9001

The output will look like this:

Output
Hello, Node V2!

Now any requests sent to the service on the cluster are redirected to the local service. This is useful during development, because you can avoid the deployment loop (build, push, deploy) for every individual change to your code.
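When you’re finished testing, you can remove the intercept so that traffic flows to the in-cluster service again, and then disconnect Telepresence entirely. These commands are available in Telepresence 2.x, though their output may vary between releases:

  1. telepresence leave hello-node
  2. telepresence quit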

Conclusion

In this tutorial, you installed Telepresence on your local machine and used it to test code changes in your local environment without deploying to Kubernetes after every change. For more tutorials and information about Telepresence, see the Telepresence documentation.
