IP geolocation, the process of determining the physical location of an IP address, can be leveraged for a variety of purposes, such as content personalization and traffic analysis. Traffic analysis by geolocation provides valuable insight into your user base, since it shows you where your users are coming from. This can help you understand who your current audience is and make informed decisions about the ideal geographical location(s) of your application servers.
In this tutorial, we will show you how to create a visual geo-mapping of the IP addresses of your application’s users by using Elasticsearch, Logstash, and Kibana.
Here’s a short explanation of how it all works. Logstash uses a GeoIP database to convert IP addresses into a latitude and longitude coordinate pair, i.e. the approximate physical location of an IP address. The coordinate data is stored in Elasticsearch in geo_point fields, and also converted into a geohash string. Kibana can then read the geohash strings and draw them as points on a map of the Earth. In Kibana 4, this is known as a Tile Map visualization.
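For instance, a single indexed log entry might carry a coordinate pair like the one below (the values are made up); Kibana’s Tile Map then buckets these points with a geohash grid aggregation to decide how densely to cluster them on the map:
"geoip" : {
    "latitude"  : 40.7306,
    "longitude" : -73.9862,
    "location"  : [ -73.9862, 40.7306 ]
}
Note that the location array is ordered longitude first, then latitude, which is how Elasticsearch expects geo_point arrays.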
Let’s take a look at the prerequisites now.
To follow this tutorial, you must have a working ELK stack. Additionally, you must have logs that contain IP addresses that can be filtered into a field, like web server access logs. If you don’t already have these two things, you can follow the first two tutorials in this series. The first tutorial will set up an ELK stack, and the second one will show you how to gather and filter Nginx or Apache access logs:
Assuming you followed the prerequisite tutorials, you have already done this. However, we are including this step again in case you skipped it, because the Tile Map visualization requires that your GeoIP coordinates are stored in Elasticsearch as a geo_point type.
On the server that Elasticsearch is installed on, download the Filebeat index template to your home directory:
- cd ~
- curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
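What makes this template work for the Tile Map is its mapping of the geoip.location field to the geo_point type. The relevant portion of such a template typically looks roughly like this (a sketch; the actual file you just downloaded may differ in the details):
"geoip" : {
    "type" : "object",
    "dynamic" : true,
    "properties" : {
        "location" : { "type" : "geo_point" }
    }
}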
Then load the template into Elasticsearch with this command:
- curl -XPUT 'http://localhost:9200/_template/filebeat' -d@filebeat-index-template.json
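If the template loads successfully, Elasticsearch should respond with something like:
{"acknowledged":true}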
To get Logstash to store GeoIP coordinates, you need to identify an application that generates logs that contain a public IP address that you can filter as a discrete field. A fairly ubiquitous application that generates logs with this information is a web server, such as Nginx or Apache. We will use Nginx access logs as the example. If you’re using different logs, make the necessary adjustments to the example.
In the Adding Filters to Logstash tutorial, the Nginx filter is stored in a file called 11-nginx-filter.conf. If your filter is located elsewhere, edit that file instead.
Let’s edit the Nginx filter now:
- sudo vi /etc/logstash/conf.d/11-nginx-filter.conf
Under the grok section, add the geoip section shown below:
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
    geoip {
      source => "clientip"
    }
  }
}
This configures the geoip filter to look up the IP address stored in the clientip field (specified by source) and add the resulting location data to the event. We are specifying the source as clientip because that is the name of the field that the Nginx user IP address is stored in. Be sure to change this value if you are storing the IP address information in a different field.
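For example, if your grok pattern happened to store the visitor’s address in a field called remote_addr (a hypothetical name), the filter would instead read:
geoip {
  source => "remote_addr"
}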
Save and exit.
To put the changes into effect, let’s restart Logstash:
- sudo service logstash restart
If everything was configured correctly, Logstash should now be storing the GeoIP coordinates with your Nginx access logs (or whichever application is generating the logs). Note that this change is not retroactive, so your previously gathered logs will not have GeoIP information added. Let’s verify that the GeoIP functionality is working properly in Kibana.
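Alternatively, if you want to check from the command line before opening Kibana, one rough way (assuming the default filebeat-* indices and the standard geoip.location field) is to ask Elasticsearch for a recent document that contains a geoip.location field:
- curl -XGET 'http://localhost:9200/filebeat-*/_search?q=_exists_:geoip.location&size=1&pretty'
If the filter is working, the response should include at least one hit with a populated geoip section.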
The easiest way to verify that Logstash was configured correctly, with GeoIP enabled, is to open Kibana in a web browser. Do that now.
Find a log message that your application generated since you enabled the GeoIP module in Logstash. Following the Nginx example, we can search Kibana for type: "nginx-access" to narrow the log selection.
Then expand one of the messages to look at the table of fields. You should see some new geoip fields that contain information about how the IP address was mapped to a real geographical location. For example:
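The exact set of fields depends on your Logstash version and GeoIP database, but a typical entry looks roughly like this (the values here are made up):
geoip.ip           203.0.113.15
geoip.country_name United States
geoip.city_name    San Francisco
geoip.latitude     37.77
geoip.longitude    -122.42
geoip.location     [-122.42, 37.77]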
Note: If you don’t see any logs, generate some by accessing your application, and ensure that your time filter is set to a recent time.
Also note that a geolocation cannot be resolved for every IP address; in particular, private addresses (such as 192.168.0.0/16 or 10.0.0.0/8) are not in the GeoIP database. If you’re just testing with one address and it doesn’t seem to be working, try some others before troubleshooting.
If, after all that, you don’t see any GeoIP information (or if it’s incorrect), you probably did not configure Logstash properly.
If you see proper GeoIP information in this view, you are ready to create your map visualization.
Note: If you haven’t used Kibana visualizations yet, check out the Kibana Dashboards and Visualizations Tutorial.
To map out the IP addresses in Kibana, let’s create a Tile Map visualization.
Click Visualize in the main menu.
Under Create a new visualization, select Tile map.
Under Select a search source you may select either option. If you have a saved search that will find the log messages that you want to map, feel free to select that search. We will proceed as if you clicked From a new search.
When prompted to Select an index pattern choose filebeat-* from the dropdown. This will take you to a page with a blank map:
In the search bar, enter type: nginx-access or another search term that will match logs that contain geoip information. Make sure your time period (upper right corner of the page) is sufficient to match some log entries. If you see No results found instead of the map, you need to update your search terms or time.
Once you have some results, click Geo Coordinates underneath the buckets header in the left-hand column. The green “play” button will become active. Click it, and your geolocations will be plotted on the map:
When you are satisfied with your visualization, be sure to save it using the Save Visualization button (floppy disk icon) next to the search bar.
Now that you have your GeoIP information mapped out in Kibana, you should be set. By itself, it should give you a rough idea of the geographical location of your users. It can be even more useful if you correlate it with your other logs by adding it to a dashboard.
Good luck!
Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.
This series will teach you how to install Logstash and Kibana on Ubuntu, how to add filters to structure your log data, and how to use Kibana.
Things to note, if you’re using ELK 5.0 and encounter this error:
[ERROR][logstash.filters.geoip ] The GeoLite2 MMDB database provided is invalid or corrupted. {:exception=>com.maxmind.db.InvalidDatabaseException: Could not find a MaxMind DB metadata marker in this file (GeoLiteCity.dat). Is this a valid MaxMind DB file?
This is because you downloaded the legacy database. Download the GeoLite2 City database instead: curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz"
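A rough sketch of wiring in the new database (the path below is only an example; adjust it to wherever you place the file and make sure Logstash can read it):
- gunzip GeoLite2-City.mmdb.gz
- sudo mv GeoLite2-City.mmdb /etc/logstash/
geoip {
  source => "clientip"
  database => "/etc/logstash/GeoLite2-City.mmdb"
}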
I followed every part of the tutorial, but I’m getting this error:
No Compatible Fields: The “[filebeat-]YYYY.MM.DD” index pattern does not contain any of the following field types: geo_point
I have searched Google and am unable to find the root cause of this error. Can you help?
Any pointers on how to do this for fluentd (using logstash format) instead of logstash?
Hi,
Thanks for the nice tutorial.
Just to mention, you may need to change the mapping for the geoip.location field and set it to ‘geo_point’, or else you won’t be able to get the field ‘geoip.location’ listed under Geo Coordinates > Field when you try to create a map.
I was using an index called ‘apache’, so initially the field was set to double. I had to change it to ‘geo_point’ to get the field onto the map.
To change the mappings:
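One rough way to do that on Elasticsearch 2.x (a sketch; the apache-template.json file name and apache-* pattern are only examples, and existing indices still need to be re-created or re-indexed afterwards) is to save a small template:
{
  "template" : "apache-*",
  "mappings" : {
    "_default_" : {
      "properties" : {
        "geoip" : {
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}
and load it the same way as the Filebeat template earlier in the tutorial:
- curl -XPUT 'http://localhost:9200/_template/apache' -d@apache-template.json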
It’s 2017, and the ELK stack is now known as the Elastic Stack.
Besides GeoIP, we can also use the IP2Location filter in Logstash.
https://www.ip2location.com/tutorials/how-to-use-ip2location-filter-plugin-with-elastic-stack
Great article. Any examples of doing the same thing, but with IIS web logs instead of Nginx?
Is there a reason to use this config?
I believe that the config below can do the same; the “location” field contains longitude and latitude and can be used in a Kibana tile map.
Conflict on geoip.location and the following error:
Strange how I have 20 tabs open here to try and figure this one out. It looks like a common problem with no clear solution for the ELK stack beginner. I came here because I wanted solutions, not more problems…
I followed the tutorial and was getting this error:
This is how I solved it.
My server is CentOS 7 with these versions of the relevant packages: elasticsearch-2.3.5-1.noarch, filebeat-1.2.3-1.x86_64, kibana-4.5.4-1.x86_64, logstash-2.3.4-1.noarch.
I use Filebeat as the collector on all my nodes and my index pattern is ‘filebeat-*’.
Contents of: 30-elasticsearch-output.conf
My Apache filter file is like the nginx one in the tutorial above.
I continued attempting to resolve the error by reading the instructions here (note: I’m not at all certain that the steps at these two links are required; I implemented them while trying to resolve the issue, did not try the solution without them, and suspect they are not necessary): https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html which pointed me to apply the changes here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-percolate.html#geo-percolate and then back to here, where I applied the first option: https://www.elastic.co/guide/en/elasticsearch/reference/current/geo-point.html
But I still got the same error.
Eventually, I saw ckyconsultinguk’s comment above from April 22, 2015 and adapted his solution a bit.
Based on his suggestions I edited the file elasticsearch-template.json (mine was located here: ‘/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch’) and changed the value of “template” from “logstash-*” to “filebeat-*”:
(This saved me the step of having to add a template entry in the output file and since I do not use the ‘logstash-*’ index I do not need the original template.)
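In other words, the edited line in elasticsearch-template.json ends up looking roughly like this (a sketch; the surrounding file may differ between versions):
"template" : "filebeat-*",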
I then stopped Logstash and deleted my old index:
DELETE /filebeat-*
(THIS WILL DELETE ALL OF YOUR ‘filebeat-*’ DATA.) Then I restarted Elasticsearch, started Logstash, and went to sleep happy. :-)
I also found a similar implementation of this idea here: https://michael.lustfield.net/misc/geo-point-with-elasticsearch-2x
What a neat tutorial… hats off to the author and team… keep up the good work! SF from SW England! P.S. Is there any place I can see an example of an application log that uses log4j? By the way, that’s what I’m going to do next…