Question

Kubernetes - Deployments became unstable after sharing same LoadBalancer

Posted April 27, 2020
Kubernetes

Hello there,

I’m trying to share one load balancer (client-service) between two deployments (client-api and client-web). After applying the configuration, both services work perfectly. However, after a few minutes the load balancer stops working and the services become unavailable.

To get it working again, I have to keep refreshing (F5) for a while until the service starts responding.

Could someone help me? If you need more information, please let me know.

kind: Service
apiVersion: v1
metadata:
  name: client-service
spec:
  type: LoadBalancer
  selector:
    app: client-app
  ports:
    - name: client-api
      protocol: TCP
      port: 3101
      targetPort: 3101
    - name: client-web
      protocol: TCP
      port: 8888
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-app
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: client-app
    spec:
      containers:
      - name: client-api
        image: <DOCKER_IMAGE>
        imagePullPolicy: Always
        ports:
        - containerPort: 3101
          protocol: TCP
      imagePullSecrets:
        - name: <DOCKER_SECRETS>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-app
  template:
    metadata:
      labels:
        app: client-app
    spec:
      containers:
      - name: client-web
        image: <DOCKER_IMAGE>
        imagePullPolicy: Always
        ports:
        - containerPort: 8888
          protocol: TCP
      imagePullSecrets:
        - name: <DOCKER_SECRETS>

1 answer

The proper way to share a load balancer between two services is to use an ingress controller along with Ingress objects that specify the routing rules.

You can find an example of how to accomplish this in our community tutorials (you may not need the cert-manager part):

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

by Hanif Jetha
In this tutorial, learn how to set up and secure an Nginx Ingress Controller with Cert-Manager on DigitalOcean Kubernetes.
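For illustration, here is a rough sketch of what that setup could look like. It assumes the nginx ingress controller from the tutorial is installed, and it uses hypothetical hostnames (api.example.com, web.example.com) and Service names (client-api-svc, client-web-svc) that are not from the original post. It also gives each Deployment its own label (app: client-api / app: client-web) so that each Service only selects its own pods, rather than both Deployments sharing app: client-app. Adjust names, ports, and the Ingress API version to your cluster.

apiVersion: v1
kind: Service
metadata:
  name: client-api-svc          # hypothetical name
spec:
  type: ClusterIP               # no external LB needed on the app Services
  selector:
    app: client-api             # assumes the client-api Deployment is relabeled app: client-api
  ports:
    - port: 3101
      targetPort: 3101
---
apiVersion: v1
kind: Service
metadata:
  name: client-web-svc          # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: client-web             # assumes the client-web Deployment is relabeled app: client-web
  ports:
    - port: 80
      targetPort: 8888          # the client-web container in the question listens on 8888; adjust if yours differs
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # newer controllers may expect spec.ingressClassName: nginx instead
spec:
  rules:
    - host: api.example.com              # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-api-svc
                port:
                  number: 3101
    - host: web.example.com              # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-web-svc
                port:
                  number: 80

On clusters older than 1.19 the Ingress API is networking.k8s.io/v1beta1 with a slightly different backend syntax (serviceName/servicePort). With this layout only the ingress controller's own Service is of type LoadBalancer, so both applications share a single load balancer and a single external IP, with routing decided by hostname (or path) rather than by a shared pod selector.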