I had this working, but somewhere along the way I broke the load balancer health checks and traffic is no longer reaching the pods. I have tried everything and am stuck. Am I missing something odd? I have noticed that the health check is using the NodePort instead of the direct container port; could that have something to do with it? I have verified that the private firewall is configured properly.
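For reference, this is how I have been inspecting the wiring so far. The allspark names match the manifests below; ingress-nginx is just the namespace where my controller happens to live, so adjust if yours differs:

# Service/port wiring: shows the NodePorts the cloud load balancer actually targets.
kubectl -n allspark get svc nginx -o wide
kubectl -n allspark describe svc nginx

# Which pods (if any) currently back that Service.
kubectl -n allspark get endpoints nginx

# State of the ingress controller and its Service (assumes the default ingress-nginx namespace).
kubectl -n ingress-nginx get pods,svc

The full manifests: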
---
apiVersion: v1
kind: Namespace
metadata:
  name: allspark
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: allspark
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: auth.vikingson.xyz
      http:
        paths:
          - pathType: Prefix
            path: "/auth"
            backend:
              service:
                name: service-auth-token
                port:
                  number: 80
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: service-test
                port:
                  number: 80
    - host: api.optimus.vikingson.xyz
      http:
        paths:
          - pathType: Prefix
            path: "/features"
            backend:
              service:
                name: service-feature-flag
                port:
                  number: 80
          - pathType: Prefix
            path: "/users"
            backend:
              service:
                name: service-users
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: allspark
  name: service-test
spec:
  ports:
    - port: 80
      targetPort: 9011
      protocol: TCP
  type: ClusterIP
  selector:
    app.kubernetes.io/name: deployment-test
---
apiVersion: v1
kind: Service
metadata:
  namespace: allspark
  name: service-auth-token
spec:
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  type: ClusterIP
  selector:
    app.kubernetes.io/name: deployment-auth-token
---
apiVersion: v1
kind: Service
metadata:
  namespace: allspark
  name: service-feature-flag
spec:
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  type: ClusterIP
  selector:
    app.kubernetes.io/name: deployment-ff-api
---
apiVersion: v1
kind: Service
metadata:
  namespace: allspark
  name: service-users
spec:
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  type: ClusterIP
  selector:
    app.kubernetes.io/name: deployment-user-api
---
apiVersion: v1
kind: Service
metadata:
  namespace: allspark
  name: nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "masked"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "1"
    # service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "true"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "8000"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/healthcheck"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: deployment-user-api
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
    - name: healthcheck
      protocol: TCP
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: allspark
  name: deployment-test
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: deployment-test
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deployment-test
    spec:
      containers:
        - image: registry.digitalocean.com/test/test4
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "200m"
            limits:
              cpu: "300m"
          name: deployment-test
          ports:
            - containerPort: 9011
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: allspark
  name: deployment-ff-api
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: deployment-ff-api
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deployment-ff-api
    spec:
      containers:
        - image: registry.digitalocean.com/test/test3
          env:
            - name: PORT
              value: "8000"
            - name: ADDRESS
              value: "0.0.0.0"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "200m"
            limits:
              cpu: "300m"
          name: deployment-ff-api
          ports:
            - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: allspark
  name: deployment-user-api
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: deployment-user-api
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deployment-user-api
    spec:
      containers:
        - image: registry.digitalocean.com/test/test
          env:
            - name: PORT
              value: "8000"
            - name: ADDRESS
              value: "0.0.0.0"
            - name: Audience
              value: ""
            - name: Issuer
              value: ""
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "200m"
            limits:
              cpu: "300m"
          name: deployment-user-api
          ports:
            - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: allspark
  name: deployment-auth-token
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: deployment-auth-token
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deployment-auth-token
    spec:
      containers:
        - image: registry.digitalocean.com/test/test2
          env:
            - name: PORT
              value: "8000"
            - name: ADDRESS
              value: "0.0.0.0"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "200m"
            limits:
              cpu: "300m"
          name: deployment-auth-token
          ports:
            - containerPort: 8000
Solved it. I looked at the nginx-ingress controllers and figured out the setup was pointing at the base controller instead of the newly created one. So I reinstalled nginx-ingress, which provisioned a new load balancer, and configured that instead. Everything works now.
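For anyone who lands here with the same symptom, the reinstall was roughly along these lines; the chart location, release name, and ingress-nginx namespace are the upstream defaults rather than anything specific to my cluster:

# Reinstall the ingress-nginx controller from the upstream chart; on DOKS this
# provisions a fresh load balancer for the controller's LoadBalancer Service.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Confirm the new controller Service picked up an external IP.
kubectl -n ingress-nginx get svc ingress-nginx-controller

After that, repoint DNS (and any load balancer annotations) at the new load balancer.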