Hi, DigitalOcean community!
I need to expose a service on each worker node. I have a running pod that listens on port 443. Please take a look at the service manifest:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: relay-service
  namespace: {{.Values.global.namespace_prefix}}{{.Values.global.s3sync.namespace}}
  labels:
    app: {{ .Chart.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: NodePort
  selector:
    name: {{ .Chart.Name }}
  ports:
    - name: tcp
      port: 443
      targetPort: 443
      protocol: TCP
```
The service has been created successfully:
```
Olegs-MacBook-Pro:helm mukolaich$ kubectl get svc --all-namespaces --kubeconfig=../frakubeconfig
NAMESPACE   NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default     relay-service   NodePort   10.245.249.99   <none>        443:32168/TCP   19h
```
Here is the detailed service description:
```
Olegs-MacBook-Pro:helm mukolaich$ kubectl describe svc sling-relay-service --kubeconfig=../frakubeconfig
Name:                     relay-service
Namespace:                default
Labels:                   app=s3sync
                          chart=s3sync-1.0.0
                          heritage=Helm
                          release=relay
Annotations:              <none>
Selector:                 name=s3sync
Type:                     NodePort
IP:                       10.245.249.99
Port:                     tcp  443/TCP
TargetPort:               443/TCP
NodePort:                 tcp  32168/TCP
Endpoints:                10.244.0.18:443,10.244.0.217:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```
Unfortunately, I can't access it using the workers' external IPs. From my point of view, it should work with the current approach (following basic Kubernetes principles).
I've also opened the firewall attached to the Kubernetes cluster.
Could someone point me in the right direction as to why it isn't working?
Thanks, Oleg
OK, I found that this setup does work for external clients when they connect to port 32168 (the automatically assigned NodePort). I need to use port 443, but NodePort is limited to the 30000-32767 range, so I'd prefer to use an ingress for routing traffic.
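For anyone who just needs a fixed, predictable port rather than 443 specifically, you can pin the NodePort explicitly instead of letting Kubernetes assign one. A minimal sketch based on the manifest above (the value 30443 is an arbitrary example, it only has to fall inside 30000-32767):

```yaml
spec:
  type: NodePort
  selector:
    name: s3sync
  ports:
    - name: tcp
      port: 443
      targetPort: 443
      nodePort: 30443   # example value; must be within the 30000-32767 range
      protocol: TCP
```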
I'm going to keep this question up for community members who need to deal with NodePort in DOKS.
Good luck
One more thing: Ingress works at L7 (HTTP/HTTPS), but my traffic is plain TCP.
So only a Service of type LoadBalancer can be used, and that will create an external cloud LB (I don't want to have a cloud LB for this).
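For completeness, the LoadBalancer variant would be a one-line change to the Service above; a sketch (the name `relay-service-lb` is just an example, and on DOKS this provisions a DigitalOcean Load Balancer per Service):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: relay-service-lb
spec:
  type: LoadBalancer
  selector:
    name: s3sync
  ports:
    - name: tcp
      port: 443
      targetPort: 443
      protocol: TCP
```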
It seems hostPort is the best solution here.
Here's an example configuration; I can confirm it's working:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: pod-name
      image: docker-image
      ports:
        - containerPort: 8086
          hostPort: 8086
```
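Note that hostPort binds the port only on the node where the pod is actually scheduled, so you reach it via that node's IP. Assuming the firewall allows port 8086, a quick check would look like this (`<node-ip>` is a placeholder for the worker's public IP):

```shell
# hypothetical check; replace <node-ip> with the public IP of the node running the pod
curl -v http://<node-ip>:8086/
```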