Port 8500 is refused

Posted October 7, 2019 · 13.8k views
DigitalOcean · Ansible · DigitalOcean Cloud Firewalls

I am trying to install Consul using the idealista/consul_role via Ansible and I am running into an error:

Timeout when waiting for

When I SSH into the server and run curl localhost:8500
I get: curl: (7) Failed to connect to localhost port 8500: Connection refused

Somehow port 8500 is not allowed. Is it possible that DigitalOcean is blocking it? I looked at my firewall rules, and my droplet allows all TCP ports for inbound and outbound. (FYI: I have other droplets using the same firewall rules and they don’t have this issue.)


3 answers

Hi, firewalls would not normally block localhost connections as it is the node connecting to itself.

Are you sure the service is running and listening on port 8500? To find out, try this command on the console / SSH terminal:

netstat -tnlp

That should list all active TCP listeners as well as the process id (pid) and process name.

If you do not see 8500 in that output then your problem probably lies elsewhere.
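
For instance, here is a quick sketch of that check (ss is the modern replacement for netstat and ships with current Ubuntu):

```shell
# List listening TCP sockets (numeric ports; run as root to see the
# owning process for every socket), then check port 8500 specifically.
ss -tnlp | grep ':8500' || echo 'no listener on port 8500'
```

If the grep prints nothing and you see the fallback message, nothing is bound to 8500 and the firewall is not the culprit.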

Hope this helps and let us know how it is going.

Yes, in order to connect to port 8500 something has to be listening. That happens only after installing AND starting the service (maybe Ansible doesn’t start it?).

You could try checking syslog or other logs for errors. Maybe try starting the service manually, as it might display the problem right there on the screen.

If Ansible hits an issue it should report an error as well, so double-check that too - it might be something straightforward.
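
One way to sketch that manual check, assuming the role registered a systemd unit named consul (the unit name is an assumption):

```shell
# Show the unit's state plus its most recent log lines; an error from
# systemd here (unknown unit, not booted with systemd) is itself a clue.
systemctl status consul --no-pager 2>&1 | head -n 20
```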

  • This is what Ansible reports on install using idealista/consul_role:

    TASK [idealista.consul-role : Consul | Wait for Consul to start up] ************
    fatal: [php7c]: FAILED! => {"changed": false, "elapsed": 60, "msg": "Timeout when waiting for"}
    • I see. I would try manually starting consul and hopefully it will print the error right on screen for us.

      Here’s some additional info on consul error logging:

      • If I run consul info I get:

        Error querying agent: Get dial tcp connect: connection refused

        Ran sudo systemctl restart consul but received no output.

        Seems there is a log folder /var/logs/consul but it is empty. (Not seeing /startup/init/systemd but maybe I’m not looking correctly.)

        • I peeked at the role on GitHub; some of it is two years old, so it’s possible that your version of Ubuntu is using a service manager newer than the role expects.

          Let’s try this page’s instructions out:

          This will try to launch consul in “dev” mode - useful for troubleshooting.

          If that doesn’t work out then yes, you might consider trying a manual install or finding a more up-to-date Ansible role.

          Good luck and let us know what happens.

          • Nice. Now it runs:

            xeno@php7c:~$ consul agent -dev
            ==> Starting Consul agent...
            ==> Consul agent running!
                       Version: 'v1.2.1'
                       Node ID: 'asdfasdfasdfasdfasdfasdf'
                     Node name: 'php7c'
                    Datacenter: 'dc1' (Segment: '<all>')
                        Server: true (Bootstrap: false)
                   Client Addr: [] (HTTP: 8500, HTTPS: -1, DNS: 8600)
                  Cluster Addr: (LAN: 8301, WAN: 8302)
                       Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
            ==> Log data will now stream in as it occurs:
                2019/10/07 15:42:42 [DEBUG] agent: Using random ID "7d6c3f82-b133-2ebb-0a0a-e1bc317454f3" as node ID
                2019/10/07 15:42:42 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:adsfasdfasdf Address:}]
                2019/10/07 15:42:42 [INFO] raft: Node at [Follower] entering Follower state (Leader: "")
                2019/10/07 15:42:42 [INFO] serf: EventMemberJoin: php7c.dc1
                2019/10/07 15:42:42 [INFO] serf: EventMemberJoin: php7c
                2019/10/07 15:42:42 [INFO] consul: Adding LAN server php7c (Addr: tcp/ (DC: dc1)
                2019/10/07 15:42:42 [INFO] consul: Handled member-join event for server "php7c.dc1" in area "wan"
                2019/10/07 15:42:42 [DEBUG] agent/proxy: managed Connect proxy manager started
                2019/10/07 15:42:42 [INFO] agent: Started DNS server (udp)
                2019/10/07 15:42:42 [INFO] agent: Started DNS server (tcp)
                2019/10/07 15:42:42 [INFO] agent: Started HTTP server on (tcp)
                2019/10/07 15:42:42 [INFO] agent: started state syncer
                2019/10/07 15:42:42 [WARN] raft: Heartbeat timeout from "" reached, starting election
                2019/10/07 15:42:42 [INFO] raft: Node at [Candidate] entering Candidate state in term 2
                2019/10/07 15:42:42 [DEBUG] raft: Votes needed: 1
                2019/10/07 15:42:42 [DEBUG] raft: Vote granted from 7d6c3f82-b133-2ebb-0a0a-e1bc317454f3 in term 2. Tally: 1
                2019/10/07 15:42:42 [INFO] raft: Election won. Tally: 1
                2019/10/07 15:42:42 [INFO] raft: Node at [Leader] entering Leader state
                2019/10/07 15:42:42 [INFO] consul: cluster leadership acquired
                2019/10/07 15:42:42 [INFO] consul: New leader elected: php7c
                2019/10/07 15:42:42 [INFO] connect: initialized CA with provider "consul"
                2019/10/07 15:42:42 [DEBUG] consul: Skipping self join check for "php7c" since the cluster is too small
                2019/10/07 15:42:42 [INFO] consul: member 'php7c' joined, marking health alive
                2019/10/07 15:42:42 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
                2019/10/07 15:42:42 [INFO] agent: Synced node info
            ==> Newer Consul version available: 1.6.1 (currently running: 1.2.1)
                2019/10/07 15:42:45 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
                2019/10/07 15:42:45 [DEBUG] agent: Node info in sync
                2019/10/07 15:42:45 [DEBUG] agent: Node info in sync

            I was able to SSH in from a different window and now I can curl port 8500 without an error.

            I guess the question now is how to have it run normally.

          • Somehow I think this part of the ansible task is not running correctly in the role:

                - name: Register systemd service
                  systemd:
                    name: consul
                    enabled: true
                    daemon_reload: true
                    state: started
          • When I run sudo service consul start I get no output. Then when I run sudo journalctl -xe I see this output:

            -- A new session with the ID 23 has been created for the user git.
            -- The leading process of the session is 25261.
            Oct 07 16:48:05 php7c systemd[1]: Started Session 23 of user git.
            -- Subject: Unit session-23.scope has finished start-up
            -- Defined-By: systemd
            -- Support:
            -- Unit session-23.scope has finished starting up.
            -- The start-up result is done.
            Oct 07 16:48:05 php7c sshd[25295]: Received disconnect from port 42428:11: disconnected by user
            Oct 07 16:48:05 php7c sshd[25295]: Disconnected from port 42428
            Oct 07 16:48:05 php7c sshd[25261]: pam_unix(sshd:session): session closed for user git
            Oct 07 16:48:05 php7c systemd-logind[1535]: Removed session 23.
            -- Subject: Session 23 has been terminated
            -- Defined-By: systemd
            -- Support:
            -- Documentation:
            -- A session with the ID 23 has been terminated.
            Oct 07 16:49:18 php7c sudo[25216]: pam_unix(sudo:session): session closed for user root
            Oct 07 16:49:41 php7c sudo[25299]:     myuser : TTY=pts/1 ; PWD=/home/myuser ; USER=root ; COMMAND=/bin/journalctl -xe
            Oct 07 16:49:41 php7c sudo[25299]: pam_unix(sudo:session): session opened for user root by myuser(uid=0)
            lines 2404-2427/2427 (END)
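
A side note on reading those logs: journalctl -xe shows the tail of the entire journal, which is why it is full of unrelated ssh/session entries. Filtering to the unit (assuming the service is registered as consul) cuts the noise:

```shell
# Show only the last 50 journal entries for the consul unit; the
# fallback covers machines where journald is not running (e.g. containers).
journalctl -u consul --no-pager -n 50 2>/dev/null \
  || echo 'journald not available here'
```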

Thanks @baraboom for your help.

I think your original suggestion - to start the service and see the errors - was correct. I was able to figure out the issue: it was failing because of the invalid config key logfile. I ended up removing "logfile": "{{ consul_logfile }}", from consul.json.j2 and it all reinstalled perfectly.

Not sure if it is because I am using 1.2.1.
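
For anyone who hits the same symptom: an unrecognized key in the config is exactly what consul validate is for, and it reports the problem before the service ever starts. A sketch, assuming the conventional config directory /etc/consul.d (the role may render the file elsewhere):

```shell
# Validate the rendered Consul configuration; an invalid key such as
# "logfile" is reported here rather than failing silently at startup.
consul validate /etc/consul.d 2>&1 \
  || echo 'validation failed (or consul is not installed here)'
```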