Can't access website by domain name, but can by IP

Posted May 25, 2019 · 80k views
DNS, Ubuntu 16.04

I submitted this question to support about a month ago, but as I haven’t received a response yet I figured I’d ask the community:

I have a droplet set up on DigitalOcean. On this droplet, I have set up a website (I set up SSH authentication and can use SFTP to transfer my website to the server; I also set up Nginx and a ufw firewall per DigitalOcean's instructions). I also have a domain registered through Google Domains. In Google Domains, I'm using Google's name servers and have set up a custom "A" resource record pointing the domain to the droplet's IP. However, when I enter the domain in my browser, it will not connect. I can access the website by typing in the IP directly, but can't access it by the domain name.

I have already contacted Google Domains about this, and they told me that everything looks set up correctly from their end, and that I should wait 48 hours for the DNS to propagate. I've now waited over 3 days and it's still not working. When I contacted them again, they once more said everything looks fine on their end and that I should contact the place hosting my server. I'm not sure what steps to take at this point to get my domain working.
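One quick way to check what the domain actually resolves to from a given machine (a sketch; `example.com` is a placeholder for the real domain):

```shell
# Hedged sketch: see what the domain currently resolves to from this machine.
# "example.com" is a placeholder for the real domain.
resolved=$(getent hosts example.com | awk '{print $1; exit}')
echo "resolves to: ${resolved:-nothing}"
```

If this prints the droplet's IP, DNS has propagated and the problem is on the server side; if it prints nothing, the record has not reached this resolver yet.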

Thanks in advance for any help!

1 comment
  • Do you have a firewall set up within your DO Control Panel? If so, you will need to allow access via ports 80 and 443; the same is true for UFW.

    Once that is working and you at least get the “Welcome to NGINX” screen, you’re just left with configuring NGINX to serve your application.
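For reference, the usual commands to open those ports through UFW (a sketch, assuming the Nginx application profiles that ship with Ubuntu's Nginx package):

```shell
# Allow HTTP (80) and HTTPS (443) via the Nginx UFW application profile,
# and keep SSH open so you don't lock yourself out.
sudo ufw allow 'Nginx Full'
sudo ufw allow OpenSSH
sudo ufw status verbose
```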


2 answers

Hey there!

First, I just want to apologize, as you didn’t seem to receive our reply to your ticket. One of my colleagues was able to address your ticket on April 29, but it doesn’t look like that made it to you!

I can see that your domain is trying to connect through port 443 (https://) which seems to be blocked:

$ telnet 443
telnet: connect to address Operation timed out
telnet: Unable to connect to remote host

I can also see that your domain loads fine over http://, as verified by the following curl result:

$ curl -IL
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 29 Apr 2019 06:27:19 GMT
Content-Type: text/html
Content-Length: 2349
Last-Modified: Sun, 28 Apr 2019 00:51:20 GMT
Connection: keep-alive
ETag: "5cc4f908-92d"
Accept-Ranges: bytes

That being the case, you’ll want to check a few things.

You can list the services that are running on your Droplet with the following command:
sudo lsof -iTCP -sTCP:LISTEN -P

Do you see Nginx running on port 443 in the list? If yes, the most likely issue is that your firewall is blocking connections. Please ensure that port 443 is allowed through any firewalls you have running on the Droplet.
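A quick way to separate the two cases on the Droplet itself (a sketch using `ss` from iproute2, which is present on stock Ubuntu):

```shell
# Is anything actually listening on TCP port 443?
if ss -ltn | grep -q ':443 '; then
  echo "something is listening on 443"
else
  echo "no listener on 443"
fi
```

If something is listening but outside connections still time out, suspect the firewall; if nothing is listening, suspect the Nginx configuration.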

If you do not see it there, the most likely issue is a misconfigured server block. You will need to check your Nginx configuration and ensure that you have a server block set up for your domain listening on port 443. Here's some further documentation that will help:
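For reference, a minimal sketch of an Nginx server block listening on 443 (the domain and certificate paths are placeholders, and the certificate files must already exist or Nginx will refuse to start the listener):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;                                  # placeholder domain

    ssl_certificate     /etc/ssl/certs/example.com.crt;       # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root /var/www/html;
    index index.html;
}
```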

Hope this helps!

  • Thank you very much for the response. I saw your reply to my ticket after reading this post, so my apologies. My question is: is it required to make my site HTTPS-compatible to get it working at all? Meaning, at least for now, is there an easier way to get it working with just HTTP? Right now I can't even access the website over plain HTTP. I'd rather get it working with just HTTP before messing around with getting a certificate to make it HTTPS-compliant. Sorry if that question doesn't make sense; I'm still new to all of this.

    When I run sudo ufw status I get:

    Status: active
    To                         Action      From
    --                         ------      ----
    Nginx HTTP                 ALLOW       Anywhere
    OpenSSH                    ALLOW       Anywhere
    Nginx HTTPS                ALLOW       Anywhere
    Nginx HTTP (v6)            ALLOW       Anywhere (v6)
    OpenSSH (v6)               ALLOW       Anywhere (v6)
    Nginx HTTPS (v6)           ALLOW       Anywhere (v6)

    So I thought that would allow all the connections you listed. Also, when I disable the ufw firewall completely I still can’t connect, so I thought it wasn’t a firewall issue at all. I also don’t have any firewall settings configured through digitalocean, so I believe the only firewall I’m running is ufw.

    When I run the command you listed, the output I get is:

    sshd     1638     root    3u  IPv4   54696      0t0  TCP *:22 (LISTEN)
    sshd     1638     root    4u  IPv6   54698      0t0  TCP *:22 (LISTEN)
    nginx   10349     root    6u  IPv4 1211599      0t0  TCP *:80 (LISTEN)
    nginx   10349     root    7u  IPv6 1211600      0t0  TCP *:80 (LISTEN)
    nginx   10350 www-data    6u  IPv4 1211599      0t0  TCP *:80 (LISTEN)
    nginx   10350 www-data    7u  IPv6 1211600      0t0  TCP *:80 (LISTEN)

    So I guess Nginx isn't running on port 443? Maybe I do have to make my website work with HTTPS, but I'd rather just have it be HTTP for simplicity's sake right now.

  • Actually, I was able to get it working, thank you so much! I ended up getting an HTTPS certificate for my server, so now Nginx is running on port 443. I went to this site to get the HTTPS certificate, which made it really easy for my Ubuntu 16.04 and Nginx configuration. Thanks so much for the help!
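One common route to such a certificate (an assumption on my part; the original link is not preserved) is Let's Encrypt via Certbot's Nginx plugin:

```shell
# Hedged sketch: obtain and install a Let's Encrypt certificate with Certbot.
# Package names and install method vary by Ubuntu release;
# example.com is a placeholder for the real domain.
sudo apt-get update
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d example.com
```

The `--nginx` plugin both obtains the certificate and rewrites the matching server block to listen on 443.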

Wow, I just spent half a day tracking down this problem. The cause is that Chrome and Firefox now force .dev (and .foo) domains to HTTPS via a preloaded HSTS list. To fix this, you can either serve the site over HTTPS or disable network.stricttransportsecurity.preloadlist in about:config in Firefox. In Chrome you can edit chrome://net-internals/#hsts, but it does not allow users to delete preloaded HSTS entries!