I was following the tutorial on this topic: https://docs.digitalocean.com/products/kubernetes/getting-started/operational-readiness/enable-https/#step-3-enable-proxy-protocol, but I was unable to get the PROXY protocol working. I did modify the files to use my own files and domain.
my Dockerfile:

```dockerfile
FROM node:23-alpine AS base

FROM base AS dependencies
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json pnpm-lock.yaml* ./
RUN corepack enable pnpm && pnpm i --frozen-lockfile

FROM base AS builder
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
RUN corepack enable pnpm && pnpm run build

FROM base AS runner
RUN apk add --no-cache nginx supervisor
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV NEXT_TELEMETRY_DISABLED=1
COPY --from=builder /app/supervisor/default.conf /etc/supervisor/conf.d/default.conf
COPY --from=builder /app/nginx/default.conf /etc/nginx/http.d/default.conf
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 80
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/default.conf"]
```
my nginx/default.conf:

```nginx
server {
    charset utf-8;
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name <redacted>;

    location / {
        proxy_pass http://localhost:3000;
        proxy_cache_bypass $http_upgrade;
    }
}
```
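Not the cause of either error, but worth noting: by the time traffic reaches this pod-level Nginx, the PROXY protocol has already been terminated by the ingress controller, so the Node.js server only sees the proxy's address unless the forwarding headers are passed along. A sketch of the `location` block with the usual headers added (the `proxy_set_header` lines are my assumption, not part of the tutorial):

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_cache_bypass $http_upgrade;
    # Assumed additions: forward the original host and client IP
    # (as seen by ingress-nginx) to the Node.js server.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```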
my supervisor/default.conf:

```ini
[program:nodejs]
command=env HOSTNAME=0.0.0.0 node server.js
directory=/app
autostart=true
autorestart=true
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stderr_logfile=/dev/stderr

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
stdout_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stderr_logfile=/var/log/nginx/error.log
```
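One detail that commonly trips up supervisord-as-entrypoint setups: supervisord daemonizes by default, so a container whose CMD is supervisord usually needs a `[supervisord]` section requesting foreground mode, or the container exits right after start. A sketch of that addition (an assumption on my part; the config above may rely on it being set elsewhere):

```ini
[supervisord]
nodaemon=true          ; keep supervisord in the foreground as PID 1
logfile=/dev/null      ; send supervisord's own log to stdout/stderr
logfile_maxbytes=0
```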
my k8s/deployment.yaml:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs
  namespace: next-k8s
spec:
  selector:
    matchLabels:
      app: nextjs
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nextjs
    spec:
      containers:
        - name: nextjs
          image: ghcr.io/<redacted>/<redacted>:latest
          resources:
            requests:
              memory: "256Mi"
              cpu: "50m"
            limits:
              memory: "512Mi"
              cpu: "100m"
          ports:
            - containerPort: 80
              protocol: TCP
```
my k8s/service.yaml:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nextjs-svc
  namespace: next-k8s
spec:
  selector:
    app: nextjs
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 80
      protocol: TCP
```
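Since the Ingress below routes to `nextjs-svc` on port 80 and TLS never reaches the pod, the `https` port mapping (443 → 80) on the Service is effectively unused. A minimal version would be (this reflects my reading of the intent, not a required fix):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextjs-svc
  namespace: next-k8s
spec:
  selector:
    app: nextjs
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
```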
my k8s/certificate-ingress.yaml:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nextjs
  namespace: next-k8s
  annotations:
    cert-manager.io/issuer: letsencrypt-nextjs
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - <redacted>
      secretName: letsencrypt-nextjs
  rules:
    - host: <redacted>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextjs-svc
                port:
                  number: 80
```
my nginx-values.yaml:

```yaml
controller:
  replicaCount: 2
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
      service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      kubernetes.digitalocean.com/load-balancer-id: <redacted>
  config:
    use-proxy-protocol: "true"
```
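For context on what `use-proxy-protocol: "true"` changes: with it enabled, ingress-nginx expects every TCP connection from the load balancer to begin with a PROXY protocol v1 header line, and rejects connections that lack it (one reason direct connections to the controller can start failing). The header is a single CRLF-terminated line; the addresses and ports below are made-up examples:

```shell
# Print an example PROXY protocol v1 header as a load balancer would
# prepend it: protocol, client IP, proxy IP, client port, destination port.
printf 'PROXY TCP4 203.0.113.7 10.110.0.2 42342 80\r\n'
```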
I applied the files in this order:

```shell
kubectl apply -f ./k8s/certificate-ingress.yaml
kubectl apply -f ./k8s/deployment.yaml
kubectl apply -f ./k8s/service.yaml
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --version 4.11.2 -n ingress-nginx -f ./nginx/values.yaml
```
In my dashboard under Networking, I have a domain added that points to the load balancer's IP. When I run `curl -Li <redacted>`, I get the following error:

```
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to <redacted>:443
```

Inside the pod, I have also tried `wget http://localhost:80` and `wget http://localhost:3000` (no curl because of the Alpine image), but I get the following error:

```
Connecting to localhost:80 ([::1]:80)
wget: server returned error: HTTP/1.1 500 Internal Server Error
```

What did I do wrong?
There are multiple places in the documentation that describe enabling the PROXY protocol for ingress-nginx, but none of them have worked for me.
Hey!
As far as I can tell (though I haven't tested this on my end), the TLS passthrough is causing the SSL errors because your backend Nginx isn't configured for HTTPS. The load balancer passes the HTTPS traffic straight through to your Nginx service, which doesn't handle HTTPS, resulting in an SSL mismatch.
You can try updating your nginx-values.yaml to remove the TLS passthrough, so that TLS is terminated at the ingress instead.

Regarding the 500 error: if you check the logs of your app, do you see the actual problem? Feel free to share the error here.
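A minimal sketch of what that change could look like, assuming cert-manager terminates TLS at the ingress (which your certificate-ingress.yaml already sets up) and keeping the PROXY protocol enabled; this is untested on my end:

```yaml
controller:
  replicaCount: 2
  service:
    type: LoadBalancer
    annotations:
      # tls-ports and tls-passthrough annotations removed: TLS now
      # terminates at the ingress controller via the cert-manager secret.
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      kubernetes.digitalocean.com/load-balancer-id: <redacted>
  config:
    use-proxy-protocol: "true"
```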
- Bobby