SSL error: broken header when a request is made from inside the Kubernetes cluster

Posted July 31, 2020
Kubernetes · DigitalOcean Managed Kubernetes

I set up my Kubernetes cluster, and a Service with an Ingress and Let's Encrypt certificates. It works fine when I make requests from outside the cluster:

# from local machine (outside of cluster)
$ curl https://my_domain/ping -iv
*   Trying
* Connected to my_domain ( port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=my_domain
*  start date: Jul 31 11:59:10 2020 GMT
*  expire date: Oct 29 11:59:10 2020 GMT
*  subjectAltName: host "my_domain" matched cert's "my_domain"
*  issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5640e5c44100)
> GET /ping HTTP/2
> Host: my_domain
> User-Agent: curl/7.66.0
> Accept: */*
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200 
HTTP/2 200 
< server: nginx/1.17.10
server: nginx/1.17.10
< date: Fri, 31 Jul 2020 12:59:24 GMT
date: Fri, 31 Jul 2020 12:59:24 GMT
< content-type: text/plain; charset=utf-8
content-type: text/plain; charset=utf-8
< content-length: 4
content-length: 4
< strict-transport-security: max-age=15724800; includeSubDomains
strict-transport-security: max-age=15724800; includeSubDomains

* Connection #0 to host my_domain left intact

But when I make the same request from inside the Kubernetes cluster, I get an error:

# From nginx-ingress-controller pod (inside of kubernetes cluster)
$ curl https://my_domain/ping -iv
*   Trying
* Connected to my_domain ( port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my_domain:443 
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to my_domain:443

Could you please explain this behaviour? It is very unclear to me.

1 answer

Hi there!

There is a known issue in the Kubernetes project with connections that originate inside the cluster but target a public URL that resolves back to the cluster's own Load Balancer. When traffic from a Kubernetes pod is addressed to the Load Balancer's IP, the kube-proxy service intercepts it and routes it directly to the Service, instead of letting it leave the node, reach the Load Balancer, and come back. This can cause a variety of issues depending on the service that receives these requests; in DOKS it often surfaces as "timeout", "bad header", or SSL error messages.
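The difference can be sketched with two requests from inside a pod (all names here are placeholders: `my-service` and `default` stand in for the Service and namespace behind the Ingress):

```shell
# From inside any pod. This request targets the LB's public IP, which
# kube-proxy hairpins straight back to the Service, bypassing the LB;
# depending on the LB configuration (e.g. PROXY protocol, TLS handling)
# this breaks the handshake:
curl -v https://my_domain/ping

# This stays on the cluster network and reaches the Service directly
# via in-cluster DNS, so no Load Balancer is involved:
curl -v http://my-service.default.svc.cluster.local/ping
```

Note the second request uses plain HTTP: since it never passes through the public edge, the Let's Encrypt certificate for `my_domain` does not apply to it.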

Current workaround options are the following:

Access DOKS services through their resolvable in-cluster service names, or by using the desired service's ClusterIP.

Or, on newer (1.14+) clusters, apply the Load Balancer hostname service annotation to the Service that provisioned the LB.
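As a sketch of the annotation workaround (the annotation name is taken from DigitalOcean's Load Balancer documentation; the Service name, selector, and ports are placeholders), the annotation makes the Service report the LB by hostname instead of by IP, which keeps kube-proxy from short-circuiting in-cluster traffic addressed to that hostname:

```yaml
# Hypothetical Service of type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller        # placeholder name
  annotations:
    # Ask the cloud controller to expose the LB as a hostname, not an IP:
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my_domain"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller       # placeholder selector
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
```

With the hostname set, pods resolving `my_domain` are sent out to the external Load Balancer rather than being rerouted by kube-proxy, so requests from inside the cluster take the same path as external ones.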

Hope this helps!



  • I followed these options; none of them worked for me. What a great three days of my life…

  • @jkwiatkoski

    I have Ingress configured on the LoadBalancer; is there a solution for the backing services behind the Ingress? The solutions posted seem to rely on the Service being of type LoadBalancer, but the services I'm interested in calling over public DNS are proxied by the Ingress to a ClusterIP.


    • @jonathanDolphin Yes, if you are also seeing this error with your ingress controller as the LB service, you would still want to apply the above hostname annotation to the ingress Service. That should prevent kube-proxy from interfering with the connection and allow it to go out to the LB.

      Let me know if this is not resolving your issue, and we can dig further.

      • Thanks,

        I think I phrased this a bit oddly: since the Ingress is doing virtual hosting for a few hostnames, I'm wondering whether it's possible to apply this to the Services behind the Ingress.


        Ingress => Service some-app
        Ingress => Service some-other-app

        So, since it’s the same ingress controller listening on the same LoadBalancer IP, would I be able to apply the hostname to some-app and some-other-app?

        I just tried putting the annotations on the backing Services and it didn’t seem to work.

        • Yes, you really only need to add one hostname to the Service to prevent the error from occurring.
          Do you have any logs of the error happening that you can share?
          Does it only occur when connecting from a local service, or all the time?

          • So I only need to add the hostname to the Service with the LoadBalancer? As in, I only need to add it to the Ingress Service?

          • Ah, sorry for not being clear enough. The annotations will only ever have any effect, and should only ever be used, on Services of type LoadBalancer. They will have no effect on any other type of Service.

            If you are trying to reach service B from service A via an LB in front of the cluster, the annotation should be placed on whichever Service provisioned the LB, whether that is service B itself or an ingress Service.

            Does that clear up the confusion?