Port 8500 is refused

October 7, 2019
Ansible DigitalOcean DigitalOcean Cloud Firewalls

I am trying to install Consul using the idealista/consul_role via Ansible, and I am running into an error:

Timeout when waiting for 127.0.0.1:8500

When I SSH into the server and run: curl localhost:8500
I get: curl: (7) Failed to connect to localhost port 8500: Connection refused

Somehow port 8500 is not accepting connections. Is it possible that DigitalOcean is blocking it? I looked at my firewall rules, and my droplet allows all TCP ports inbound and outbound. (FYI: I have other droplets using the same firewall rules, and they don’t have this issue.)

3 Answers

Hi - a firewall would not normally block localhost connections, since that is the node connecting to itself.

Are you sure the service is running and listening on port 8500? To find out, try this command on the console / SSH terminal:

netstat -tnlp

That should list all active TCP listeners as well as the process id (pid) and process name.

If you do not see 8500 in that output then your problem probably lies elsewhere.
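
For reference, if Consul were up and listening, that output would include a line along these lines (just a sketch - the pid, program name, and bind address will differ on your system):

    tcp        0      0 127.0.0.1:8500          0.0.0.0:*               LISTEN      1234/consul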

Hope this helps and let us know how it is going.

  • No, I am not seeing it there.

    Do you think Consul would show up there once installed? I’m wondering if I should try installing manually instead of via Ansible.

Yes - in order to connect to port 8500, something has to be listening. That only happens after installing AND starting the service (maybe Ansible doesn’t start it?).

You could try checking syslog or other logs for errors. Or try starting the service manually - it might display the problem right there on the screen.

If Ansible hits an issue, it should report an error as well, so double-check its output - it might be something straightforward.
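
For example (a sketch - both commands assume the role installed a systemd unit actually named consul):

    sudo systemctl status consul    # is the unit loaded and active, and what are its last log lines?
    sudo journalctl -u consul -e    # jump to the end of that unit's journal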

  • This is what Ansible reports when installing with idealista/consul_role:

    TASK [idealista.consul-role : Consul | Wait for Consul to start up] ************
    fatal: [php7c]: FAILED! => {"changed": false, "elapsed": 60, "msg": "Timeout when waiting for 127.0.0.1:8500"}
    
    • I see. I would try starting Consul manually; hopefully it will print the error right on the screen for us.

      Here’s some additional info on consul error logging:

      https://support.hashicorp.com/hc/en-us/articles/115015668287-Where-are-my-Consul-logs-and-how-do-I-access-them
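
      If the role put its config under /etc/consul.d (an assumption - check the role’s defaults for the real path), running the agent in the foreground should surface any config error immediately:

          sudo consul agent -config-dir=/etc/consul.d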

      • If I run consul info I get:

        Error querying agent: Get http://127.0.0.1:8500/v1/agent/self: dial tcp 127.0.0.1:8500: connect: connection refused
        

        Ran sudo systemctl restart consul but received no output.

        There seems to be a log folder, /var/logs/consul, but it is empty. (I’m not seeing /startup/init/systemd, but maybe I’m not looking in the right place.)

        • I peeked at the role on GitHub; some of it is 2 years old, so it’s possible that your version of Ubuntu uses a newer service manager than the role expects.

          Let’s try this page’s instructions out:

          https://learn.hashicorp.com/consul/getting-started/agent

          This will launch Consul in “dev” mode - useful for troubleshooting.
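
          From that page, the command is simply:

              consul agent -dev

          This runs a single in-memory server agent bound to localhost, so none of your existing config or service setup gets in the way.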

          If that doesn’t work out, then yes, you might consider a manual install or a more up-to-date Ansible role.

          Good luck and let us know what happens.

          • Nice. Now it runs:

            xeno@php7c:~$ consul agent -dev
            ==> Starting Consul agent...
            ==> Consul agent running!
                       Version: 'v1.2.1'
                       Node ID: 'asdfasdfasdfasdfasdfasdf'
                     Node name: 'php7c'
                    Datacenter: 'dc1' (Segment: '<all>')
                        Server: true (Bootstrap: false)
                   Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
                  Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
                       Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
            
            ==> Log data will now stream in as it occurs:
            
                2019/10/07 15:42:42 [DEBUG] agent: Using random ID "7d6c3f82-b133-2ebb-0a0a-e1bc317454f3" as node ID
                2019/10/07 15:42:42 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:adsfasdfasdf Address:127.0.0.1:8300}]
                2019/10/07 15:42:42 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
                2019/10/07 15:42:42 [INFO] serf: EventMemberJoin: php7c.dc1 127.0.0.1
                2019/10/07 15:42:42 [INFO] serf: EventMemberJoin: php7c 127.0.0.1
                2019/10/07 15:42:42 [INFO] consul: Adding LAN server php7c (Addr: tcp/127.0.0.1:8300) (DC: dc1)
                2019/10/07 15:42:42 [INFO] consul: Handled member-join event for server "php7c.dc1" in area "wan"
                2019/10/07 15:42:42 [DEBUG] agent/proxy: managed Connect proxy manager started
                2019/10/07 15:42:42 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
                2019/10/07 15:42:42 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
                2019/10/07 15:42:42 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
                2019/10/07 15:42:42 [INFO] agent: started state syncer
                2019/10/07 15:42:42 [WARN] raft: Heartbeat timeout from "" reached, starting election
                2019/10/07 15:42:42 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
                2019/10/07 15:42:42 [DEBUG] raft: Votes needed: 1
                2019/10/07 15:42:42 [DEBUG] raft: Vote granted from 7d6c3f82-b133-2ebb-0a0a-e1bc317454f3 in term 2. Tally: 1
                2019/10/07 15:42:42 [INFO] raft: Election won. Tally: 1
                2019/10/07 15:42:42 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
                2019/10/07 15:42:42 [INFO] consul: cluster leadership acquired
                2019/10/07 15:42:42 [INFO] consul: New leader elected: php7c
                2019/10/07 15:42:42 [INFO] connect: initialized CA with provider "consul"
                2019/10/07 15:42:42 [DEBUG] consul: Skipping self join check for "php7c" since the cluster is too small
                2019/10/07 15:42:42 [INFO] consul: member 'php7c' joined, marking health alive
                2019/10/07 15:42:42 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
                2019/10/07 15:42:42 [INFO] agent: Synced node info
            ==> Newer Consul version available: 1.6.1 (currently running: 1.2.1)
                2019/10/07 15:42:45 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
                2019/10/07 15:42:45 [DEBUG] agent: Node info in sync
                2019/10/07 15:42:45 [DEBUG] agent: Node info in sync
            

            I was able to open a second SSH session, and now I can curl port 8500 without an error.

            I guess the question now is how to get it running normally, as a service.
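
            To see what the role actually installed, I can dump the unit it registered (assuming it is named consul):

                systemctl cat consul         # print the unit file systemd has on disk
                systemctl is-enabled consul  # check whether the role enabled it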

          • Somehow I think this part of the role’s Ansible tasks is not running correctly:

                - name: Register systemd service
                  systemd:
                    name: consul
                    enabled: true
                    daemon_reload: true
                    state: started
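
            One way to re-run just that step by hand, outside the full playbook (assuming php7c is in the inventory), is an ad-hoc call to the same module:

                ansible php7c -b -m systemd -a "name=consul state=started daemon_reload=yes"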
            
            
          • When I run sudo service consul start, I get no output. Then when I run sudo journalctl -xe, I see this output:

            -- A new session with the ID 23 has been created for the user git.
            --
            -- The leading process of the session is 25261.
            Oct 07 16:48:05 php7c systemd[1]: Started Session 23 of user git.
            -- Subject: Unit session-23.scope has finished start-up
            -- Defined-By: systemd
            -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
            --
            -- Unit session-23.scope has finished starting up.
            --
            -- The start-up result is done.
            Oct 07 16:48:05 php7c sshd[25295]: Received disconnect from 10.136.67.30 port 42428:11: disconnected by user
            Oct 07 16:48:05 php7c sshd[25295]: Disconnected from 10.136.67.30 port 42428
            Oct 07 16:48:05 php7c sshd[25261]: pam_unix(sshd:session): session closed for user git
            Oct 07 16:48:05 php7c systemd-logind[1535]: Removed session 23.
            -- Subject: Session 23 has been terminated
            -- Defined-By: systemd
            -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
            -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
            --
            -- A session with the ID 23 has been terminated.
            Oct 07 16:49:18 php7c sudo[25216]: pam_unix(sudo:session): session closed for user root
            Oct 07 16:49:41 php7c sudo[25299]:     myuser : TTY=pts/1 ; PWD=/home/myuser ; USER=root ; COMMAND=/bin/journalctl -xe
            Oct 07 16:49:41 php7c sudo[25299]: pam_unix(sudo:session): session opened for user root by myuser(uid=0)
            lines 2404-2427/2427 (END)
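
            (Most of that is unrelated SSH session noise; a more targeted view of the same journal, assuming the unit is named consul, would be:

                sudo journalctl -u consul --no-pager -n 50

            which shows only the last 50 lines logged by the service itself.)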
            

Thanks @baraboom for your help.

I think your original suggestion - start the service and look at the errors - was correct. I was able to figure out the issue: it was failing because of the invalid config key logfile. I ended up removing “logfile”: “{{ consul_logfile }}”, from consul.json.j2, and it all reinstalled perfectly.

Not sure if it is because I am using 1.2.1.
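
For what it’s worth, Consul can lint a config directory before the service ever starts, which would have surfaced the bad key immediately (assuming the role writes its config to /etc/consul.d):

    consul validate /etc/consul.d/

(If I remember right, Consul’s built-in file-logging option, log_file, only appeared in 1.5.0, so log-file keys are a common mismatch between templates and older agents like 1.2.1.)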

  • So glad to hear it! Good job troubleshooting through the issue. Let us know if you have any other questions.
