Question

NodePort not always source-NATing, it seems

Hi there,

I’m running MySQL in a StatefulSet and exposing it through a NodePort service so that other droplets can reach it (with external-dns for easier configuration).
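For context, the StatefulSet itself is nothing special. Trimmed down, it looks roughly like this (names and image are approximated here; the parts that matter are the labels the Service selects on and the 3306 container port):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-production
spec:
  serviceName: mysql-production
  replicas: 1
  selector:
    matchLabels:
      app: mysql
      env: production
  template:
    metadata:
      labels:
        app: mysql
        env: production
    spec:
      containers:
      - name: mysql
        image: mysql:8.0        # placeholder, the actual image/version differs
        ports:
        - containerPort: 3306
          name: mysql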

When I dig mysql.cloud.domain.org I get the following answer section (so clients get load-balanced across all my nodes), which is exactly what I expect:

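(That’s just a plain dig against the name, no special flags.)

» dig mysql.cloud.domain.org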
;; ANSWER SECTION:
mysql.cloud.domain.org.	30	IN	A	10.110.0.8
mysql.cloud.domain.org.	30	IN	A	10.110.0.9
mysql.cloud.domain.org.	30	IN	A	10.110.0.7

» kubectl get nodes -o=wide
NAME                STATUS   ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
pool-basic-cj1le    Ready    <none>   16d   v1.21.11   10.110.0.8    178.*.*.190    Debian GNU/Linux 10 (buster)   4.19.0-17-amd64   containerd://1.4.13
pool-db-cg05r       Ready    <none>   33h   v1.21.11   10.110.0.7    165.*.*.79     Debian GNU/Linux 10 (buster)   4.19.0-17-amd64   containerd://1.4.13
pool-db-cj1af       Ready    <none>   16d   v1.21.11   10.110.0.9    206.*.*.63      Debian GNU/Linux 10 (buster)   4.19.0-17-amd64   containerd://1.4.13

The StatefulSet runs on the node named ‘pool-db-cg05r’. Everything works as expected; however, when the DNS round-robin picks the node the MySQL instance runs on (pool-db-cg05r, i.e. 10.110.0.7), SNAT does not seem to work properly: the source IP is not rewritten to that node’s internal IP but to its external one. Here’s a small experiment:

Here are the results of connecting to the other two nodes:

» mysql -p -h 10.110.0.8 -P30306 -e 'select user()'
Enter password:
+---------------------+
| user()              |
+---------------------+
| nicolas@10.110.0.8  |
+---------------------+
» mysql -p -h 10.110.0.9 -P30306 -e 'select user()'
Enter password:
+--------------------+
| user()             |
+--------------------+
| nicolas@10.110.0.9 |
+--------------------+

Here is the result of connecting to the node where MySQL runs:

» mysql -p -h 10.110.0.7 -P30306 -e 'select user()'

Enter password:
ERROR 1045 (28000): Access denied for user 'nicolas'@'165.*.*.79' (using password: YES)

As you can see, it uses the external IP of the node (165.*.*.79) instead of the internal one (10.110.0.7), so I assume something is wrong with the source NAT on the NodePort.
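To dig a bit deeper, I was going to check the NAT state on pool-db-cg05r directly. Not sure this is the right place to look, but something along these lines (assuming kube-proxy runs in iptables mode):

# on pool-db-cg05r
sudo iptables -t nat -S KUBE-POSTROUTING    # masquerade/SNAT rules installed by kube-proxy
sudo iptables -t nat -S | grep 30306        # DNAT rules for the NodePort
sudo conntrack -L -p tcp | grep 30306       # live connections and their translated source address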

Here is the service definition:

apiVersion: v1
kind: Service
metadata:
  name: mysql-production-service-proxy
  annotations:
    kubernetes.digitalocean.com/firewall-managed: "false"
    external-dns.alpha.kubernetes.io/hostname: mysql.cloud.domain.org
    external-dns.alpha.kubernetes.io/ttl: "30"
    external-dns.alpha.kubernetes.io/access: private
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30306
  type: NodePort
  selector:
    app: mysql
    env: production
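For completeness, this is how I confirm that the Service endpoint really is the MySQL pod on pool-db-cg05r (names as in the manifest above):

» kubectl get svc mysql-production-service-proxy -o wide
» kubectl get endpoints mysql-production-service-proxy
» kubectl get pods -l app=mysql,env=production -o wide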

Any clues on how I can ensure the connection is always source-NATed to the node’s internal IP, regardless of which node the client hits?

Best regards,

Nicolas

