Pod of Ingress Nginx controller generates 400 error continuously.

Posted on November 22, 2021

Hi there,

I installed the Ingress Nginx controller via the 1-click app installation, but a pod of the Ingress Nginx controller logs a 400 error every 3 seconds. Log:

...
10.104.0.4 - - [22/Nov/2021:05:11:55 +0000] "PROXY TCP4 10.104.0.4 10.104.0.3 13254 30719" 400 0 "-" "-" 0 0.000 [] [] - - - - a6c2621ff5139e5e5e7985ca53532cb9
10.104.0.4 - - [22/Nov/2021:05:11:58 +0000] "PROXY TCP4 10.104.0.4 10.104.0.3 13256 30719" 400 0 "-" "-" 0 0.000 [] [] - - - - c30fc815d69a307c72ab76197684f3c3
10.104.0.4 - - [22/Nov/2021:05:12:01 +0000] "PROXY TCP4 10.104.0.4 10.104.0.3 13258 30719" 400 0 "-" "-" 0 0.000 [] [] - - - - ff1a2ee7982f3169c807bc6ee4a215b2
10.104.0.4 - - [22/Nov/2021:05:12:04 +0000] "PROXY TCP4 10.104.0.4 10.104.0.3 13260 30719" 400 0 "-" "-" 0 0.000 [] [] - - - - 0268acf70de314c0d92815ccbfcd8270
...

But there is no pod with the IP 10.104.0.4:

% k get pods --all-namespaces -o wide
NAMESPACE       NAME                                             READY   STATUS      RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
ingress-nginx   ingress-nginx-admission-create-wtjsg             0/1     Completed   0          67m   10.244.0.113   pool-stg-u5m64tn0f-ujgc7   <none>           <none>
ingress-nginx   ingress-nginx-admission-patch-g8nbs              0/1     Completed   1          67m   10.244.0.126   pool-stg-u5m64tn0f-ujgc7   <none>           <none>
ingress-nginx   ingress-nginx-controller-5c8d66c76d-c44xr        1/1     Running     0          67m   10.244.0.39    pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     cilium-4bnkr                                     1/1     Running     0          69m   10.104.0.3     pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     cilium-operator-777cf5958d-b4vnj                 1/1     Running     0          74m   10.104.0.3     pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     coredns-85d9ccbb46-9qrn2                         1/1     Running     0          74m   10.244.0.92    pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     coredns-85d9ccbb46-wv2j4                         1/1     Running     0          74m   10.244.0.52    pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     csi-do-node-mfwfl                                2/2     Running     0          69m   10.104.0.3     pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     do-node-agent-z87wj                              1/1     Running     0          69m   10.104.0.3     pool-stg-u5m64tn0f-ujgc7   <none>           <none>
kube-system     kube-proxy-stvs2                                 1/1     Running     0          69m   10.104.0.3     pool-stg-u5m64tn0f-ujgc7   <none>           <none>

Why is this happening? I can’t use Ingress because of this error. Please give me any advice.

Thank you in advance.

The steps I took

  1. Create the cluster
  2. Install the Ingress Nginx Controller via the 1-click app
  3. Edit the load balancer’s settings (forwarding rules) to: HTTP on port 80 → HTTP on port 30719, and HTTPS on port 443 → HTTP on port 32491 (see the note after these steps)
  4. (Install the postgres operator)
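
The node ports in step 3 are the HTTP and HTTPS node ports of the controller’s Service. Assuming the default Service name from the 1-click install, they can be checked with:

% k get svc -n ingress-nginx ingress-nginx-controller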


Hey,

From the log snippet you provided, it looks like a TCP probe to your Nginx Ingress controller is not being handled properly. Something is opening connections to the controller every few seconds, but what it sends (“PROXY TCP4 …”) is not a valid HTTP request line, hence the 400 errors.

Here are a few steps and considerations to troubleshoot and potentially resolve this issue:

  1. Check your load balancer’s health checks. Ensure they are configured correctly for HTTP/HTTPS traffic, as a misconfiguration here can produce exactly these errors (see the annotation sketch after this list).

  2. Double-check your Ingress resource configurations. Ensure that they are correctly set up to route traffic to your services.

  3. If you have network policies in place, make sure they are not inadvertently blocking or rerouting traffic in a way that could cause these errors.

  4. Ensure that the services your Ingress is pointing to are up and running. Sometimes, if a service endpoint is down or not responding correctly, it can lead to errors at the Ingress level.

  5. If you’re using the PROXY protocol in your setup, ensure it’s enabled on both ends. Your log lines begin with “PROXY TCP4”, which is a PROXY protocol header being parsed as an HTTP request line; that usually means the load balancer is sending PROXY protocol but Nginx is not configured to accept it, or vice versa (see the ConfigMap sketch after this list).

  6. To identify the source of these requests, you might want to temporarily increase the logging verbosity of your Nginx Ingress controller (see the commands after this list). This can give you more detailed information about the incoming connections behind these 400 errors.

  7. Don’t forget to check the Kubernetes events (kubectl get events) and the logs of the Ingress controller pod for any warnings or errors that might give clues (commands after this list as well).

  8. If you have external monitoring or scanning tools that access your cluster, verify their configuration as they might be sending traffic that causes these errors.
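
On point 1: on DigitalOcean, the load balancer’s health check is controlled by annotations on the controller’s Service. A minimal sketch, assuming the default Service name from the 1-click install and that the annotation names match the current digitalocean-cloud-controller-manager (worth verifying against its docs):

# Health-check the controller over HTTP on its health endpoint
# (annotation names assumed from digitalocean-cloud-controller-manager)
kubectl -n ingress-nginx annotate service ingress-nginx-controller \
  service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol=http \
  service.beta.kubernetes.io/do-loadbalancer-healthcheck-path=/healthz \
  --overwrite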
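
On point 5: both sides have to agree about PROXY protocol. On the load balancer it is toggled with the Service annotation service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol, and on the controller with the use-proxy-protocol key in its ConfigMap. A minimal sketch of the controller side, assuming the default ConfigMap name from the 1-click install:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name assumed from the default install
  namespace: ingress-nginx
data:
  # Tell Nginx to expect the PROXY protocol header the load balancer prepends.
  # Enable this only together with the load balancer annotation; enabling it
  # on one side only produces exactly these "PROXY TCP4 ..." 400s.
  use-proxy-protocol: "true"

The controller watches this ConfigMap and reloads Nginx on changes, so a kubectl apply is enough; no pod restart is needed.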

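On points 6 and 7, a few commands to gather more detail (namespace and names taken from your output above):

# Raise the controller's log verbosity (klog flag; it is noisy, so revert afterwards):
# edit the Deployment and add --v=3 to the controller container's args
kubectl -n ingress-nginx edit deployment ingress-nginx-controller

# Recent events and the controller's latest log lines
kubectl get events -n ingress-nginx --sort-by=.lastTimestamp
kubectl logs -n ingress-nginx ingress-nginx-controller-5c8d66c76d-c44xr --tail=50
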
Since you mentioned this happens every 3 seconds, it strongly points to some kind of automated health check or monitoring process. The solution might involve tweaking the configuration of the entity that’s making these requests, be it a load balancer, a monitoring tool, or something else in your infrastructure.

Best of luck!

Bobby
