I constantly receive HTTP 503:
HTTP 503 Service Unavailable. No server is available to handle this request
from my K8S ingress-nginx LoadBalancer. Previously it happened roughly every second request; now it happens every time. All of the K8S droplets are reported healthy. This is the second day in a row (it started on 2022-08-02).
To reproduce:
curl -H 'host: u00.hix.dev' http://159.89.252.251/api/health --verbose
I use the Host header only because of the many cluster recreates and the resulting DNS caching. Once dig +short u00.hix.dev resolves to 159.89.252.251, you can simply run:
curl http://u00.hix.dev/api/health --verbose
The response:
* Trying 159.89.252.251:80...
* Connected to 159.89.252.251 (159.89.252.251) port 80 (#0)
> GET /api/health HTTP/1.1
> Host: u00.hix.dev
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< cache-control: no-cache
< content-length: 104
< content-type: text/html
< date: Wed, 03 Aug 2022 13:40:01 GMT
<
* Connection #0 to host 159.89.252.251 left intact
<html><body><h1>503 Service Unavailable</h1>No server is available to handle this request.</body></html>
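As an aside, curl's --resolve flag achieves the same DNS bypass while keeping a regular URL and Host header:
curl --resolve u00.hix.dev:80:159.89.252.251 http://u00.hix.dev/api/health --verbose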
I enabled the PROXY protocol both in the kubernetes/ingress-nginx Helm chart's ConfigMap (use-proxy-protocol: true) and in the Service annotation (service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"), as mentioned in the kubernetes/ingress-nginx docs for the DO provider.
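For reference, both settings can be applied through the chart's values; a sketch (the ingress-nginx/ingress-nginx repo alias is an assumption, and the escaped dots are required by helm's --set key syntax):
helm upgrade hix-lb ingress-nginx/ingress-nginx -n lb --reuse-values \
  --set-string controller.config.use-proxy-protocol=true \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"=true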
The workloads run in the u00 K8S namespace. To verify them, I ran curl from one service to the other over internal K8S HTTP: I installed curl on the hix-api Pod and then executed kubectl exec -it -nu00 hix-api-65994c8448-lg682 -- curl http://hix-web:4000/api/health --verbose, receiving HTTP 200. It also worked with curl http://hix-cms:5000/_health --verbose.
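The same check can also be reproduced without installing curl into an app image, using a throwaway pod (a sketch; the curlimages/curl image is an assumption, any image carrying curl works):
kubectl run curl-test -nu00 -i --rm --restart=Never \
  --image=curlimages/curl --command -- curl -sS http://hix-web:4000/api/health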
The cluster itself was created with Terraform's digitalocean_kubernetes_cluster resource. See more detailed descriptions of the K8S resources below.
Result of kubectl get svc -nlb hix-lb-ingress-nginx-controller:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hix-lb-ingress-nginx-controller LoadBalancer 10.245.217.41 hix.dev 80:30080/TCP,443:30443/TCP 4h4m
Result of kubectl get svc -nlb hix-lb-ingress-nginx-controller -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: 1a9ffc3f-6389-4d42-992c-1c5f20020b51
    meta.helm.sh/release-name: hix-lb
    meta.helm.sh/release-namespace: lb
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: 96944fa9-164b-4d8d-9cfa-e7713c7e8a7e
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "31111"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: tcp
    service.beta.kubernetes.io/do-loadbalancer-hostname: hix.dev
    service.beta.kubernetes.io/do-loadbalancer-id: 1a9ffc3f-6389-4d42-992c-1c5f20020b51
    service.beta.kubernetes.io/do-loadbalancer-name: hix-lb
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "1"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  creationTimestamp: "2022-08-03T10:14:20Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: hix-lb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.0
    helm.sh/chart: ingress-nginx-4.2.0
  name: hix-lb-ingress-nginx-controller
  namespace: lb
  resourceVersion: "75175"
  uid: 9a2c35da-d37b-42c9-b9a8-9b3c0fcb0365
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.245.217.41
  clusterIPs:
  - 10.245.217.41
  externalTrafficPolicy: Local
  healthCheckNodePort: 31917
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  - appProtocol: https
    name: https
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: hix-lb
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: hix.dev
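For reference, externalTrafficPolicy: Local makes Kubernetes allocate a dedicated healthCheckNodePort (31917 above, which differs from the 31111 pinned in the do-loadbalancer-healthcheck-port annotation); it can be read back with:
kubectl get svc -nlb hix-lb-ingress-nginx-controller -o jsonpath='{.spec.healthCheckNodePort}'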
Result of kubectl get ingress -nu00 hix-u00:
NAME CLASS HOSTS ADDRESS PORTS AGE
hix-u00 nginx u00.hix.dev,api.u00.hix.dev,cms.u00.hix.dev hix.dev 80 3h58m
Result of kubectl get ingress -nu00 hix-u00 -o yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"hix-u00","namespace":"u00"},"spec":{"defaultBackend":{"service":{"name":"hix-web","port":{"number":4000}}},"ingressClassName":"nginx","rules":[{"host":"u00.hix.dev","http":{"paths":[{"backend":{"service":{"name":"hix-web","port":{"number":4000}}},"path":"/","pathType":"Prefix"}]}},{"host":"api.u00.hix.dev","http":{"paths":[{"backend":{"service":{"name":"hix-api","port":{"number":3000}}},"path":"/","pathType":"Prefix"}]}},{"host":"cms.u00.hix.dev","http":{"paths":[{"backend":{"service":{"name":"hix-cms","port":{"number":5000}}},"path":"/","pathType":"Prefix"}]}}]}}
  creationTimestamp: "2022-08-03T10:21:56Z"
  generation: 1
  name: hix-u00
  namespace: u00
  resourceVersion: "39240"
  uid: a898b695-7410-4334-9e93-82d9892724c1
spec:
  defaultBackend:
    service:
      name: hix-web
      port:
        number: 4000
  ingressClassName: nginx
  rules:
  - host: u00.hix.dev
    http:
      paths:
      - backend:
          service:
            name: hix-web
            port:
              number: 4000
        path: /
        pathType: Prefix
  - host: api.u00.hix.dev
    http:
      paths:
      - backend:
          service:
            name: hix-api
            port:
              number: 3000
        path: /
        pathType: Prefix
  - host: cms.u00.hix.dev
    http:
      paths:
      - backend:
          service:
            name: hix-cms
            port:
              number: 5000
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - hostname: hix.dev
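A kubectl describe of the same Ingress additionally resolves each backend to its pod endpoints, which is a quick way to confirm the controller sees them:
kubectl describe ingress -nu00 hix-u00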
Result of kubectl describe configmap -nlb hix-lb-ingress-nginx-controller:
Name: hix-lb-ingress-nginx-controller
Namespace: lb
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=hix-lb
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
helm.sh/chart=ingress-nginx-4.2.0
Annotations: meta.helm.sh/release-name: hix-lb
meta.helm.sh/release-namespace: lb
Data
====
use-proxy-protocol:
----
true
allow-snippet-annotations:
----
true
gzip-level:
----
5
use-gzip:
----
true
BinaryData
====
Events:
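To double-check that use-proxy-protocol actually reaches nginx, the rendered config can be grepped inside the controller (targeting deploy/hix-lb-ingress-nginx-controller is an assumption based on the chart's naming):
kubectl exec -nlb deploy/hix-lb-ingress-nginx-controller -- grep -m1 proxy_protocol /etc/nginx/nginx.conf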
All application workloads run in the u00 namespace. Result of kubectl get all -nu00:
NAME READY STATUS RESTARTS AGE
pod/hix-api-65994c8448-lg682 1/1 Running 0 153m
pod/hix-bg-7b64bb7547-84f5g 1/1 Running 0 153m
pod/hix-cms-75d6fd44cf-7n5zf 1/1 Running 0 153m
pod/hix-web-6478ff6fd9-m8vmz 1/1 Running 0 153m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hix-api ClusterIP 10.245.137.85 <none> 3000/TCP 153m
service/hix-cms ClusterIP 10.245.136.119 <none> 5000/TCP 153m
service/hix-web ClusterIP 10.245.248.229 <none> 4000/TCP 153m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hix-api 1/1 1 1 153m
deployment.apps/hix-bg 1/1 1 1 153m
deployment.apps/hix-cms 1/1 1 1 153m
deployment.apps/hix-web 1/1 1 1 153m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hix-api-65994c8448 1 1 1 153m
replicaset.apps/hix-bg-7b64bb7547 1 1 1 153m
replicaset.apps/hix-cms-75d6fd44cf 1 1 1 153m
replicaset.apps/hix-web-6478ff6fd9 1 1 1 153m
As you can see, everything is up and running.
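A related sanity check (not shown above) is that the Services actually carry endpoints:
kubectl get endpoints -nu00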
If it matters: I created the LoadBalancer using Terraform's digitalocean_loadbalancer resource and then updated it with Terraform's helm_release in order to add DO's Service annotations and the extra ConfigMap data fields.
At some point I observed that the service briefly starts returning HTTP 200 after the Terraform helm_release updates the annotations. I first thought a misconfigured annotation was the culprit, but then realized that the refresh itself improves availability for a short time.
I also tried following the ingress-nginx Pod logs, but they don't say much; for the HTTP 503 responses they currently log nothing at all. The only semi-interesting entries I've seen from time to time are 400 Bad Request lines. Examples:
10.108.16.7 - - [03/Aug/2022:10:28:41 +0000] "PROXY TCP4 128.14.133.58 159.89.252.251 60552 443" 400 150 "-" "-" 0 0.002 [] [] - - - - 15dab2f1b8bea5d879677b0b8405efb6
10.108.16.7 - - [03/Aug/2022:10:47:44 +0000] "PROXY TCP4 163.123.143.71 159.89.252.251 59478 80" 400 150 "-" "-" 0 0.081 [] [] - - - - e468477c2177f06518a36426f61d8000
10.108.16.7 - - [03/Aug/2022:11:40:56 +0000] "PROXY TCP4 108.185.4.210 159.89.252.251 58449 80" 400 150 "-" "-" 0 0.001 [] [] - - - - 2680cfc751060ce7b9e5f431df101d75
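For reference, the controller logs can be followed with a label selector (labels taken from the Service above):
kubectl logs -nlb -l app.kubernetes.io/name=ingress-nginx -f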
What's the reason for the "HTTP 503 Service Unavailable. No server is available to handle this request" responses that I receive? What's my mistake?
Thanks in advance.
The hand-created load balancer is your issue. You shouldn't create a load balancer by hand; just create the ingress-nginx Helm release with the DO annotations set in its values, and it will request the creation of the load balancer (correctly) automatically.
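A sketch of what that can look like (repo alias, release name, and the exact values are illustrative; the DO cloud controller then provisions the load balancer from the Service annotations automatically):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install hix-lb ingress-nginx/ingress-nginx -n lb --create-namespace \
  --set-string controller.config.use-proxy-protocol=true \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"=true \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-name"=hix-lb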