By nicoskaralis
We have a fairly large monolithic web app built in Rails, with background jobs, recurring workers, WebSockets, and content delivery. After 4 years it has become impossible to maintain such a big structure, and updates have been a problem. So I’m migrating the code base to Kubernetes and splitting the services into Docker containers.
The one web server would be split into 4 different containers, each serving a web app for a different domain. The AnyCable server (and gRPC provider) for each domain would run in a container of its own. The workers would also run in containers, etc.
Since Kubernetes provides a private network for the containers, I’m not too worried about internal communication. My problem is with the load balancer and web servers.
My first attempt was to use nginx-ingress. From what I understand, DOKS automatically maps it to a DO Load Balancer, which doesn’t allow the custom rules I need, specifically this:
location /cable {
  proxy_pass http://app1_upstream;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "Upgrade";
  proxy_set_header Host $host;
}
My second option would be to create a Droplet in the same VPC that runs nginx, create an upstream for each pod that contains a web app, and manually handle the "Connection: Upgrade" header:
upstream app1_upstream { server 192.168.100.10:8010; }
upstream app2_upstream { server 192.168.100.11:8010; }
upstream app3_upstream { server 192.168.100.12:8010; }
The problem with that is that I can’t find a reliable way to update the upstreams when a container is replaced (for an update or after a failure) or scaled (replicas added or removed).
I’m new to Docker and Kubernetes. How can I build this structure? Or at least, where can I find materials that can help me with it?
Hi, it looks like you are trying to use a DO Load Balancer with the ingress-nginx controller.
"My first attempt was to use nginx-ingress. From what I understand the DOKS automatically maps it to a DO load balancer, which doesn’t allow custom rules that I need, specifically this:"
You do not create routing rules on the Load Balancer itself. After you deploy your Ingress Controller with a Load Balancer, you create a Kubernetes Ingress resource, and that Ingress is where you define the routing you are looking for.
After you deploy your Ingress Controller, you can create your Ingress. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rails
  namespace: your_namespace
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: your_domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: name_of_your_rails_service
            port:
              number: port_of_your_rails_service
      - path: /cable
        pathType: Prefix
        backend:
          service:
            name: name_of_your_cable_service
            port:
              number: port_of_your_cable_service
      - path: /your_other_paths
        pathType: Prefix
        backend:
          service:
            name: name_of_your_other_service
            port:
              number: port_of_your_other_service
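The Ingress routes to Services, and a Service is also what solves your concern about upstreams going stale: Kubernetes keeps a Service's endpoints in sync as pods are replaced or scaled, so you never maintain a static upstream list like you would in a hand-managed nginx Droplet. A minimal Service for the Rails app might look like this (the name, labels, and port 8010 are assumptions matching the placeholders above and the ports in your question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: name_of_your_rails_service
  namespace: your_namespace
spec:
  selector:
    app: rails          # must match the labels on your Rails pods
  ports:
  - port: 8010          # port the Service exposes inside the cluster
    targetPort: 8010    # port your Rails container listens on
```

ingress-nginx also handles the WebSocket "Connection: Upgrade" handshake for you, so no equivalent of your custom location /cable block is needed; the proxy-read-timeout and proxy-send-timeout annotations above keep long-lived cable connections from being closed.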