Question

Nginx 502 bad gateway (Python app)

I have been following this tutorial to get a simple Python app working with NGINX and uwsgi on an Ubuntu server

https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-ubuntu-14-04

I managed to get the 'hello' part to display on my IP at port 8080 as expected, but at the end, when I am supposed to be able to see that text on the IP itself and not just on port 8080, I get a 502 Bad Gateway error.
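For reference, the 'hello' step that does work for me is just the tutorial's wsgi.py served directly over HTTP (so nginx and the socket are not involved yet):

def application(environ, start_response):
    # plain WSGI callable from the tutorial; no framework involved
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ["<h1 style='color:blue'>Hello There!</h1>"]

started with uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi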

I have been over the code many times and for hours, as well as reading other threads here and on places like Stack Overflow, and nothing seems to work. Some people have suggested changing chmod-socket from 664 to 666 or 600, but neither of those works.

I have been reading this similar thread too

https://www.digitalocean.com/community/questions/how-to-fix-502-bad-gateway-error-with-nginx-uwsgi-for-flask-app

It seems this person has a slightly different nginx conf file setup, but when I've tried adding things such as this to my conf file:

exec uwsgi --ini app.ini

it still doesn't work. However, if I run that command in my terminal, I can get the 'Hello there!' text to appear on all ports, but according to the tutorial I'm not supposed to have to run it manually. When I do run it, it seems to spawn multiple workers (not sure if it's supposed to do that), and this is the output:

*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x22a37d0 pid: 17920 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 17920)
spawned uWSGI worker 1 (pid: 17940, cores: 1)
spawned uWSGI worker 2 (pid: 17941, cores: 1)
spawned uWSGI worker 3 (pid: 17942, cores: 1)
spawned uWSGI worker 4 (pid: 17943, cores: 1)
spawned uWSGI worker 5 (pid: 17944, cores: 1)

This is my nginx log output, and it looks like there is a problem with the sock file:

[crit] 17492#0: *1 connect() to unix:///home/david/myapp/myapp.sock failed (2: No such file or directory) while connecting to upstream, client: 125.24.238.132, server: 128.199.97.37, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:///home/david/myapp/myapp.sock:", host: "128.199.97.37"
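For reference, my nginx server block follows the tutorial's layout and points at that same socket path (roughly; I've only substituted my own user, app name, and droplet IP):

server {
    listen 80;
    server_name 128.199.97.37;

    location / {
        include uwsgi_params;
        # this must match the socket file that uWSGI actually creates
        uwsgi_pass unix:/home/david/myapp/myapp.sock;
    }
}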

I don't know how to fix this, though. All that is mentioned in the tutorial is that the .ini file should have this line in it:

socket = myapp.sock

which my .ini file has, so what is going on here? Is the sock file not being created or something?
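For context, my .ini follows the tutorial's layout and looks roughly like this (treat it as a sketch; only the app name differs from the tutorial's example):

[uwsgi]
module = wsgi

master = true
processes = 5

# the socket path is relative to the directory uWSGI starts in,
# so it has to end up at /home/david/myapp/myapp.sock for nginx to find it
socket = myapp.sock
chmod-socket = 664
vacuum = true

die-on-term = true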

I should add that nginx, uWSGI, and my app are all running.



Ryan Quinn
DigitalOcean Employee
October 6, 2016
Accepted Answer

I think that this may be where you ran into trouble:

Create an Upstart File to Manage the App

The tutorial you linked to is specific to Ubuntu 14.04, but your question is tagged 16.04. Between these releases, Ubuntu switched its init system to systemd and no longer provides proper support for Upstart jobs. I would recommend either following that tutorial on Ubuntu 14.04, which is still supported, or digging into systemd to create a service file manually.
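If you do want to stay on 16.04, a minimal unit for this setup would look something like the following. This is only a sketch: the user, paths, and the uwsgi binary location (assumed here to be a virtualenv inside the project directory, as in the tutorial) need to match your own install.

# /etc/systemd/system/myapp.service
[Unit]
Description=uWSGI instance to serve myapp
After=network.target

[Service]
User=david
Group=www-data
WorkingDirectory=/home/david/myapp
# assumes uwsgi was installed into a virtualenv inside the project directory
ExecStart=/home/david/myapp/myappenv/bin/uwsgi --ini app.ini

[Install]
WantedBy=multi-user.target

After saving it, sudo systemctl start myapp and sudo systemctl enable myapp should bring the socket up (and recreate it at boot) so nginx can reach it, taking the place of the Upstart job from the tutorial.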
