Healthy load balancer with healthy droplets returning 503

I’ve set up a load balancer and attached two droplets that are all in the same region. The health checks are all green and I’m able to reach the two droplets individually by accessing their public IP on port 8080 (my NodeJS app is listening on port 8080). I’m wondering what else I could be missing. The load balancer is configured to forward HTTP on port 80 to HTTP on port 8080 on the droplets. It seems odd that I can reach the droplets directly but the load balancer can’t. The error I’m getting is: “503 Service Unavailable: No server is available to handle this request”

Managed to resolve the issue using this answer. Basically, a firewall rule needs to be created allowing inbound traffic to the port the node’s app listens on, with the source set to, for example, everywhere or just the load balancer.
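On each droplet, that fix might look like the following `ufw` rules (a sketch, assuming port 8080 from the question and that `ufw` is the firewall in use; a DigitalOcean Cloud Firewall with an equivalent inbound rule works the same way):

```bash
# Allow inbound traffic to the app's port (assumed 8080 here) from anywhere:
sudo ufw allow 8080/tcp

# ...or restrict it to the load balancer only (substitute your LB's IP):
# sudo ufw allow from <load-balancer-ip> to any port 8080 proto tcp

# Verify the rules took effect:
sudo ufw status
```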

I had the same issue. I decided to install nginx on each droplet and have that handle proxying from port 80 to my node app’s port (4000, in my case). I noticed that after installing - but before configuring the proxy - the load balancer was showing the default nginx welcome page.
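For anyone trying the same workaround, a minimal nginx server block for that proxy might look like this (port 4000 comes from the post; the file path and headers are common conventions, not something the post specifies):

```nginx
# e.g. /etc/nginx/sites-available/default (typical Ubuntu path; adjust as needed)
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:4000;  # the Node app's port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After saving, `sudo nginx -t && sudo systemctl reload nginx` applies the change.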

This indicates to me that the load balancer 80->4000 forwarding option is not taking any effect at all, and I was previously seeing the 503 error because it was hitting my droplets on port 80.

I’m going to leave the 80->4000 setting in place for now, even after setting up nginx. It’s possible that this forwarding option just takes some time to kick in? I’ll report back if I find out how to do this without nginx. (We’ll see what happens when I get into SSL, too.)