How To Use the DigitalOcean ELK Stack One-Click Application

By Andrew SB. Posted Jan 28, 2015.
The DigitalOcean ELK Stack One-Click Application provides you with a quick way to launch a centralized logging server. The ELK Stack is made up of three key pieces of software: Elasticsearch, Logstash, and Kibana. Together they allow you to collect, search, and analyze log files from across your infrastructure. Logstash collects and parses the incoming logs, Elasticsearch indexes them, and Kibana gives you a powerful web interface to visualize the data.

This tutorial will show you how to launch an ELK instance and set up Filebeat on your other servers to send their logs to your new centralized logging server.

Creating the ELK Stack Droplet

To begin, create a droplet and specify its hostname and size. It is recommended that you run the ELK Stack on a droplet with at least 2GB of RAM.

Select your desired region:

Select ELK Stack on Ubuntu 14.04 from the Applications tab:

If you use SSH keys to manage your droplets (which are more secure than passwords and are recommended), you can also specify which keys you want added to this server.

Access Your Kibana Credentials

Once your server has been spun up, you will be able to access the Kibana frontend in a web browser via the server's IP address. However, it is password protected. In order to retrieve the randomly generated password, you will need to access the server via the command line.

You can log into your droplet with the following command:

ssh root@your_ip_address

If you are prompted for a password, type in the password that was emailed to you when the server was created. Alternatively, if you set up the droplet with SSH keys, you will be logged in without a password prompt.

Once you are logged in, you will see the message of the day (MOTD), which contains your password. It will look like this:

Thank you for using DigitalOcean's ELK Stack Application.

Your Kibana instance can be accessed at http://your_ip_address/
Your Kibana login credentials are:
Username: admin
Password: your_generated_password
Now that you have your login credentials, you can access Kibana by entering your droplet's IP address in your browser and providing your username and password.
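
If you would like to check your credentials from the command line first, you can make an authenticated request with curl. This is a quick sketch assuming the one-click image protects Kibana with HTTP basic authentication (which is how the browser's password prompt is implemented):

curl -i -u admin:your_password http://your_ip_address/

An HTTP 200 response means Kibana is up and your credentials are correct; a 401 means the password was not accepted.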

Using Kibana

Kibana is highly configurable. You can create custom dashboards with filtered searches and visualizations of your data. By default, the ELK One-Click is set up to collect the syslog and Nginx access logs from the droplet itself, so you should already have data to look at when you first log in.

In order to begin viewing your data, you first must configure an index pattern. This can be done by selecting [filebeat]-YYYY.MM.DD from the Index Patterns menu on the left and then clicking the Star button to set it as the default index.
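
If the index pattern does not appear, you can confirm on the ELK server itself that Filebeat indices exist in Elasticsearch. This assumes Elasticsearch is listening on its default port, 9200, on localhost:

curl 'http://localhost:9200/_cat/indices?v'

You should see one filebeat-YYYY.MM.DD index for each day that logs have been collected.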

Click Discover in the top navigation bar to view the logs that have already been created:

To explore more ways to visualize your data, check out this tutorial: How To Use Kibana Dashboards and Visualizations.

Forwarding Logs

In order to send logs to your ELK server, you will need to install and configure Filebeat on your other servers. This tutorial focuses on installing it on Ubuntu, but you can forward logs from servers running CentOS as well.

We will now configure a client server to send its syslogs to your ELK server.

Installing The SSL Certificate

To encrypt traffic between your client servers and your ELK server, a self-signed SSL certificate is generated on the ELK droplet's first boot. You must install this certificate on each client server. On your ELK server, run this command to copy the SSL certificate to a client server:

scp /etc/pki/tls/certs/logstash-forwarder.crt user@client.ip.address:/tmp

The SSL certificate will now be present in the /tmp directory on the client server. Next, install it in the correct location:

sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
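
Optionally, you can verify that the certificate survived the copy intact by inspecting it with openssl:

openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates

This prints the certificate's subject and validity dates; an error here means the file was corrupted in transit.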

Installing Filebeat

On the client server, add the Elastic Beats repository to your APT sources and download its signing key:

wget -O - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo 'deb https://packages.elastic.co/beats/apt stable main' | sudo tee /etc/apt/sources.list.d/filebeat.list

Then install the Filebeat package:

sudo apt-get update
sudo apt-get install filebeat

Next, you will want to ensure that Filebeat will automatically start on boot:

sudo update-rc.d filebeat defaults

Configure Filebeat

On the client server, create and edit Filebeat's configuration file, which is in YAML format:

sudo nano /etc/filebeat/filebeat.yml

The file includes many commented-out options; here we will use the defaults in most cases. We will configure Filebeat to connect to your ELK server on port 5044 and to use the SSL certificate that you installed earlier. The paths section specifies which log files to send, and the document_type setting marks these logs as type "syslog" (the type that our Logstash filter is looking for).

After removing the commented-out options and substituting your ELK server's IP address for elk_server_IP, your Filebeat configuration will look like this:

filebeat:
  prospectors:
    - paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog
output:
  logstash:
    hosts: ["elk_server_IP:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Save and quit. Now restart Filebeat to put our changes into place:

sudo service filebeat restart
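
If Filebeat has trouble connecting after the restart, you can test that this client can reach Logstash's Beats input over SSL with openssl's s_client, assuming Logstash is listening on port 5044 as configured above:

openssl s_client -connect elk_server_IP:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt

A successful handshake ends with "Verify return code: 0 (ok)"; a refused connection or certificate error points to a firewall or certificate problem.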

Now Filebeat is sending your syslogs to your ELK server!
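
To confirm the logs are actually being indexed, you can query Elasticsearch directly on the ELK server. This again assumes Elasticsearch is listening on localhost:9200:

curl 'http://localhost:9200/filebeat-*/_search?pretty&size=1'

If the hits total is greater than zero, your client's syslogs are arriving and being indexed.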

Automatically Install On New Droplets

You will need to repeat this process for each existing server that you wish to gather logs from, but you can streamline the process for new servers using DigitalOcean's metadata service. When creating a new droplet, you can provide a cloud-config file that will automatically configure Filebeat as your droplet first boots.

In order to do so, you must copy the contents of the SSL certificate from your ELK server. You can view the file by running:

cat /etc/pki/tls/certs/logstash-forwarder.crt

Now you can create a cloud-config file automating the steps we took above:

#cloud-config
write_files:
  - content: |
      -----BEGIN CERTIFICATE-----
      . . .
      -----END CERTIFICATE-----
    path: /etc/pki/tls/certs/logstash-forwarder.crt
  - content: |
      filebeat:
        prospectors:
          - paths:
              - /var/log/auth.log
              - /var/log/syslog
            input_type: log
            document_type: syslog
      output:
        logstash:
          hosts: ["HOST_IP_ADDR:5044"]
          tls:
            certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    path: /etc/filebeat/filebeat.yml
runcmd:
  - wget -O - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  - echo 'deb https://packages.elastic.co/beats/apt stable main' | sudo tee /etc/apt/sources.list.d/filebeat.list
  - sudo apt-get update
  - sudo apt-get install -y filebeat
  - sudo update-rc.d filebeat defaults
  - sudo service filebeat start

Make sure to replace the contents of the certificate with your own, including the BEGIN and END lines, and substitute your ELK server's IP address for HOST_IP_ADDR.

Now, when creating a new droplet, you can paste this file into the Enable User Data field:

As your new server comes online, new data will start flowing to your ELK server and be visible in Kibana.
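
If you create droplets through the DigitalOcean API rather than the control panel, you can pass the same file in the user_data field. Here is a rough sketch using curl and jq (jq handles escaping the multi-line file into a JSON string); $DO_TOKEN is a personal access token, cloud-config.yml is assumed to be the file you created above, and the name, region, and size values are just examples:

curl -X POST "https://api.digitalocean.com/v2/droplets" \
    -H "Authorization: Bearer $DO_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg user_data "$(cat cloud-config.yml)" \
        '{name: "client-01", region: "nyc3", size: "2gb",
          image: "ubuntu-14-04-x64", user_data: $user_data}')"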

Further Information

In order to process the logs it receives, Logstash needs filters to parse the incoming files and extract structured data, as different log files have very different formats. These filters are installed to /etc/logstash/conf.d/. By default, the ELK Stack application includes filters for syslog and for Nginx's access log. For instance, here is /etc/logstash/conf.d/10-syslog.conf:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

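The Nginx access log filter works the same way. Its exact contents are not reproduced here, but a minimal sketch of such a filter, assuming a document_type of nginx-access and Nginx's default combined log format (which the stock COMBINEDAPACHELOG grok pattern matches), would look something like this:

filter {
  # Hypothetical type name; match it to the document_type set in filebeat.yml.
  if [type] == "nginx-access" {
    grok {
      # Nginx's default "combined" access log format is Apache-compatible.
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
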
For more information on how to write Logstash filters, and an example Apache filter, check out this tutorial: Adding Logstash Filters To Improve Centralized Logging.

