Tutorial

How To Install and Configure Elasticsearch on Ubuntu 18.04

Published on April 22, 2020

By Erin Glass

Senior Manager, DevEd

A previous version of this article was written by Toli.

Introduction

Elasticsearch is a platform for distributed search and analysis of data in real time. It is a popular choice due to its usability, powerful features, and scalability.

This article will guide you through installing Elasticsearch, configuring it for your use case, securing your installation, and beginning to work with your Elasticsearch server.

Prerequisites

Before following this tutorial, you will need:

  • One Ubuntu 18.04 server with a non-root user that has sudo privileges and a firewall configured with UFW, which you can set up by following the Initial Server Setup with Ubuntu 18.04 tutorial.

For this tutorial, we will work with the minimum amount of CPU and RAM required to run Elasticsearch. Note that the amount of CPU, RAM, and storage that your Elasticsearch server will require depends on the volume of logs that you expect.
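
If you want to confirm how much RAM and how many CPU cores your server has before you begin, you can check with the following commands (an optional sanity check, not part of the original steps):

  1. free -h
  2. nproc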

Step 1 — Installing Elasticsearch

The Elasticsearch components are not available in Ubuntu’s default package repositories. They can, however, be installed with APT after adding Elastic’s package source list.

All of the packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.

To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.

  1. curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
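
If you'd like to verify that the key was imported, you can list the keys APT currently trusts (an optional check; apt-key is deprecated on newer releases but still works on Ubuntu 18.04):

  1. sudo apt-key list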

Next, add the Elastic source list to the sources.list.d directory, where APT will look for new sources:

  1. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
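
To confirm that the source entry was written correctly, you can print the file you just created (optional):

  1. cat /etc/apt/sources.list.d/elastic-7.x.list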

Next, update your package lists so APT will read the new Elastic source:

  1. sudo apt update

Then install Elasticsearch with this command:

  1. sudo apt install elasticsearch
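
Once the installation finishes, you can confirm exactly which version was installed from the Elastic repository by querying dpkg (optional):

  1. dpkg -s elasticsearch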

Elasticsearch is now installed and ready to be configured.

Step 2 — Configuring Elasticsearch

To configure Elasticsearch, we will edit its main configuration file elasticsearch.yml where most of its configuration options are stored. This file is located in the /etc/elasticsearch directory.

Use your preferred text editor to edit Elasticsearch’s configuration file. Here, we’ll use nano:

  1. sudo nano /etc/elasticsearch/elasticsearch.yml

Note: Elasticsearch’s configuration file is in YAML format, which means that we need to maintain the indentation format. Be sure that you do not add any extra spaces as you edit this file.

The elasticsearch.yml file provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.
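
For example, if you later grow this into a multi-node cluster, you might give the cluster and node descriptive names in this same file. The values below are purely illustrative, and the defaults are fine for this single-server tutorial:

/etc/elasticsearch/elasticsearch.yml
. . .
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# Use a descriptive name for the node:
#
node.name: node-1
. . .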

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API (https://en.wikipedia.org/wiki/Representational_state_transfer). To restrict access and therefore increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:

/etc/elasticsearch/elasticsearch.yml
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

We have specified localhost so that Elasticsearch listens only on the loopback interface and cannot be reached from other hosts. If you want it to listen on a specific interface instead, you can specify that interface's IP in place of localhost. Save and close elasticsearch.yml. If you're using nano, you can do so by pressing CTRL+X, followed by Y and then ENTER.
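
As an illustration, binding to a specific private interface instead would look like the excerpt below, where 203.0.113.4 is only a placeholder documentation address that you would replace with your server's own IP. Note that in Elasticsearch 7.x, binding to a non-loopback address triggers additional production bootstrap checks, so further discovery settings may be required:

/etc/elasticsearch/elasticsearch.yml
. . .
network.host: 203.0.113.4
. . .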

These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.

Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up. Otherwise, you may get errors about not being able to connect.

  1. sudo systemctl start elasticsearch

Next, run the following command to enable Elasticsearch to start up every time your server boots:

  1. sudo systemctl enable elasticsearch
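
If you want to confirm that the service came up and is listening only on the loopback address, you can check its status and its open sockets (optional checks; the exact output will vary, and port 9300 is Elasticsearch's transport port):

  1. sudo systemctl status elasticsearch
  2. sudo ss -lntp | grep -E '9200|9300'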

With Elasticsearch enabled upon startup, let’s move on to the next step to discuss security.

Step 3 — Securing Elasticsearch

By default, Elasticsearch can be controlled by anyone who can access the HTTP API. This is not always a security risk because Elasticsearch listens only on the loopback interface (that is, 127.0.0.1), which can only be accessed locally. Thus, no public access is possible and as long as all server users are trusted, security may not be a major concern.

If you need to allow remote access to the HTTP API, you can limit the network exposure with Ubuntu’s default firewall, UFW. This firewall should already be enabled if you followed the steps in the prerequisite Initial Server Setup with Ubuntu 18.04 tutorial.

We will now configure the firewall to allow access to the default Elasticsearch HTTP API port (TCP 9200) for the trusted remote host, generally the server you are using in a single-server setup, such as 198.51.100.0. To allow access, type the following command:

  1. sudo ufw allow from 198.51.100.0 to any port 9200
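
If UFW was not already enabled as part of your initial server setup, it is also a good idea to make sure SSH traffic is allowed before turning the firewall on, so that you do not lock yourself out of the server (an optional precaution):

  1. sudo ufw allow OpenSSH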

Once that is complete, you can enable UFW with the command:

  1. sudo ufw enable

Finally, check the status of UFW with the following command:

  1. sudo ufw status

If you have specified the rules correctly, the output should look like this:

Output
Status: active

To                         Action      From
--                         ------      ----
9200                       ALLOW       198.51.100.0
22                         ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)

UFW should now be enabled and set up to protect Elasticsearch on port 9200.

If you want to invest in additional protection, Elasticsearch offers built-in security features through X-Pack (the successor to the older commercial Shield plugin); some of these features require a paid subscription.

Step 4 — Testing Elasticsearch

By now, Elasticsearch should be running on port 9200. You can test it with cURL and a GET request.

  1. curl -X GET 'http://localhost:9200'

You should see the following response:

Output
{ "node.name" : "My First Node", "cluster.name" : "mycluster1", "version" : { "number" : "2.3.1", "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39", "build_timestamp" : "2020-04-04T12:25:05Z", "build_snapshot" : false, "lucene_version" : "5.5.0" }, "tagline" : "You Know, for Search" }

If you see a response similar to the one above (your node name, cluster name, and version details will reflect your own installation), Elasticsearch is working properly. If not, make sure that you have followed the installation instructions correctly and that you have allowed some time for Elasticsearch to fully start.

To perform a more thorough check of Elasticsearch, execute the following command:

  1. curl -XGET 'http://localhost:9200/_nodes?pretty'

In the output from the above command you can verify all the current settings for the node, cluster, application paths, modules, and more.
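
You can also ask for a brief cluster health summary, which is a quick way to confirm that the node is up and the cluster is in a green or yellow state (an optional extra check, not part of the original steps):

  1. curl -XGET 'http://localhost:9200/_cluster/health?pretty'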

Step 5 — Using Elasticsearch

To start using Elasticsearch, let’s first add some data. Elasticsearch uses a RESTful API, which responds to the usual CRUD commands: create, read, update, and delete. To work with it, we’ll use the cURL command again.

You can add your first entry like so:

  1. curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello, World!" }'

You should receive the following response:

Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":2,"result":"updated","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":1,"_primary_term":1}

With cURL, we have sent an HTTP POST request to the Elasticsearch server. The URI of the request was /tutorial/helloworld/1 with several parameters:

  • tutorial is the index of the data in Elasticsearch.
  • helloworld is the type.
  • 1 is the ID of our entry under the above index and type.

You can retrieve this first entry with an HTTP GET request.

  1. curl -X GET 'http://localhost:9200/tutorial/helloworld/1'

This should be the resulting output:

Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello, World!" }}

To modify an existing entry, you can use an HTTP PUT request.

  1. curl -X PUT -H "Content-Type: application/json" 'localhost:9200/tutorial/helloworld/1?pretty' -d '
  2. {
  3. "message": "Hello, People!"
  4. }'

Elasticsearch should acknowledge successful modification like this:

Output
{ "_index" : "tutorial", "_type" : "helloworld", "_id" : "1", "_version" : 2, "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "created" : false }

In the above example we have modified the message of the first entry to “Hello, People!”. With that, the version number has been automatically increased to 2.

You may have noticed the extra argument pretty in the above request. It enables a human-readable format so that each data field is printed on its own line. You can also "prettify" your results when retrieving data to get more readable output by entering the following command:

  1. curl -X GET -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1?pretty'

Now the response will be formatted for a human to parse:

Output
{ "_index" : "tutorial", "_type" : "helloworld", "_id" : "1", "_version" : 2, "found" : true, "_source" : { "message" : "Hello, People!" } }

We have now added and queried data in Elasticsearch. To learn about the other operations, please check the API documentation.

Conclusion

You have now installed, configured, and begun to use Elasticsearch. Since the original release of Elasticsearch, Elastic has developed three additional tools — Logstash, Kibana, and Beats — to be used in conjunction with Elasticsearch as part of the Elastic Stack. Used together, these tools allow you to search, analyze, and visualize logs generated from any source and in any format in a practice known as centralized logging. To get started with the Elastic Stack on Ubuntu 18.04, please see our guide How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04.
