The previous tutorials in this series guided you through how to install and configure Suricata. They also explained how to use Filebeat to send alerts from your Suricata server to an Elastic Stack server, to be used with its built-in Security Information and Event Management (SIEM) functionality.
In this final tutorial in the series, you will create custom Kibana rules and generate alerts within Kibana’s SIEM dashboards. Once you have rules in place and understand where and how to filter Suricata’s logs using Kibana, you’ll explore how to create and manage cases using Kibana’s timeline analysis tools.
By the end of this tutorial you will have a SIEM system that you can use to track and investigate security events across all of the servers in your network.
If you have been following this tutorial series, you should already have a server with at least 4GB RAM and 2 CPUs, and a non-root user configured. For the purposes of this guide, you can set this up by following our initial server setup guides for either Ubuntu 20.04, Debian 11, or Rocky Linux 8, depending on your operating system of choice.
You will also need Suricata installed and running on your server. If you need to install Suricata on your server, you can do so using one of the following tutorials depending on your operating system:
You will also need a server running the Elastic Stack and configured so that Filebeat can send logs from your Suricata server to Elasticsearch. If you need to create an Elastic Stack server, use one of the tutorials from the following list that matches your operating system:
Ensure that you can log in to Kibana on your Elasticsearch server, and that there are events in the various Suricata Alerts and Events dashboards.
Once you have all the prerequisites in place, open an SSH tunnel to your Kibana server and log in to Kibana with your browser using the credentials that you generated in the previous tutorial.
Before you can create rules, alerts, and timelines in Kibana, you need to enable an `xpack` security module setting.
Open your `/etc/elasticsearch/elasticsearch.yml` file with `nano` or your preferred editor:
- sudo nano /etc/elasticsearch/elasticsearch.yml
Add the following highlighted line to the end of the file:
. . .
discovery.type: single-node
xpack.security.enabled: true
xpack.security.authc.api_key.enabled: true
Save and close the file when you are done editing. If you are using `nano`, you can do so with `CTRL+X`, then `Y` and `ENTER` to confirm.
Now restart Elasticsearch so that the new settings take effect.
- sudo systemctl restart elasticsearch.service
You’re now ready to configure rules, examine alerts, and create timelines and cases in Kibana.
To use Kibana’s SIEM functionality with Suricata event data, you will need to create rules that will generate alerts about incoming events. Visit the Rules Dashboard in Kibana’s security app page to create or import rules.
For the purposes of this tutorial, we will use the following signatures to detect traffic directed to a server on mismatched ports (SSH, HTTP, and TLS traffic respectively):
alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000000;)
alert ssh any any -> 2001:DB8::1/32 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000001;)
alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; classtype:misc-activity; sid:1000002;)
alert http any any -> 2001:DB8::1/32 !80 (msg:"HTTP REQUEST on non-HTTP port"; classtype:misc-activity; sid:1000003;)
alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; classtype:misc-activity; sid:1000004;)
alert tls any any -> 2001:DB8::1/32 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; classtype:misc-activity; sid:1000005;)
If you are using your own signatures, or those from a rule set, ensure that you can generate alerts and that you can access the corresponding events in the default Suricata dashboards in Kibana.
Now visit the Rules page in Kibana’s Security app http://localhost:5601/app/security/rules/. Click the Create new rule button in the top right of the page.
Ensure that the Custom query rule type card is selected. Scroll to the Custom query input field and paste the following into it:
rule.id: "1000000" or rule.id: "1000001"
Ensure that your `rule.id` values match Suricata’s `sid` value for the attack or attacks that you would like to alert about.
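If you plan to alert on many signatures, you can assemble the custom query programmatically rather than typing it by hand. The following Python sketch (the list of `sid` values is an example) builds the same `rule.id` query string from a list of Suricata `sid` values:

```python
# Build a Kibana custom query that matches any of the given Suricata sids.
# Suricata's sid value appears in Elasticsearch as the rule.id field.
def build_rule_query(sids):
    clauses = ['rule.id: "{}"'.format(sid) for sid in sids]
    return " or ".join(clauses)

print(build_rule_query([1000000, 1000001]))
# rule.id: "1000000" or rule.id: "1000001"
```

Paste the resulting string into the Custom query field.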
Change the Query quick preview drop-down to Last Month and then click Preview Results. Assuming you have matching events in your Suricata logs, the page will update in place with a graph that shows alerts from the last month. Your page should resemble the following screenshot:
Click Continue to proceed to adding a name in the Rule name field, which is required for every rule you add. In this example we’ll use the message description from the Suricata rule: `SSH TRAFFIC on non-SSH port`. Add a description for the rule as well. We’ll use `Check for SSH connection attempts on non-standard ports` in this example.
You can also expand the Advanced Settings section and add details about the rule. For example, you could add an explanation about how to handle an alert generated by the rule, or link to security researchers’ articles about a particular attack type.
When you are done adding the rule name, description, and optional extra fields, click Continue to proceed to Step 3 of creating the rule.
Leave the next Schedule rule section settings with their default values and click Continue.
Finally, on the Rule actions step, click Create & activate rule.
You will be redirected to a new page that shows details about the rule:
Note: It can take a few minutes for alert data to populate at first. This delay is because the rule’s default schedule is to run every 5 minutes.
If there are other Suricata rules that you would like alerts about, repeat the above steps, substituting the signature’s `sid` into Kibana’s custom query `rule.id` field.
Once you have a rule or rules in place, you are ready to proceed to the next step where you’ll examine alerts and create a case or cases to manage them.
Now that you have a rule or rules configured to generate alerts in Kibana’s SIEM app, you’ll need a way to further group and manage alerts. To get started, visit Kibana’s alerts dashboard: http://127.0.0.1:5601/app/security/alerts.
Be sure that you have generated some invalid traffic that matches the Suricata signature or signatures that you are using. For instance, you could trigger the example `sid:1000000` Suricata rule by running a command like the following from your local machine:
- ssh -p 80 your_server_ip
This command will try connecting to your server using SSH on port 80, instead of the default port 22, and should trigger an alert. It may take a few minutes for the alert to show up in Kibana, since it has to be processed by Elasticsearch and the rule that you created in Kibana.
Next, you’ll add the `community_id` field to the table of alerts that is displayed at the bottom of the page. Recall from the first tutorial that this field is generated by Suricata and represents the unique IP addresses and ports contained in a network flow. Click the Fields button and in the modal dialog that pops up, enter `network.community_id` and then tick the check box beside the field name:
Close the modal and the field will be added to the table of alerts. Now hover over any of the alerts with the same `community_id` value and click the Add to timeline investigation icon. This will ensure that all alerts that share the `community_id` that Suricata added to the event are added to a timeline for further investigation:
Next click the Untitled Timeline link at the bottom left of your browser. This link will take you to a page that only displays alerts with the Suricata `community_id` field that you want to investigate.
The timeline page shows you more detail about individual packets that are associated with an alert, or network flow. You can use the timeline to get a better idea of when a suspicious network flow started, where it originated, and how long it lasted.
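The `community_id` value itself is not opaque: it follows the open Community ID flow hashing specification, which Suricata implements. The following Python sketch approximates the calculation for an IPv4 TCP or UDP flow (simplified for illustration; it may not match Suricata’s output byte-for-byte in every case):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(src_ip, src_port, dst_ip, dst_port, proto=6, seed=0):
    # Order the endpoints so both directions of a flow hash identically.
    src = (socket.inet_aton(src_ip), src_port)
    dst = (socket.inet_aton(dst_ip), dst_port)
    if dst < src:
        src, dst = dst, src
    data = struct.pack("!H", seed)               # 2-byte seed
    data += src[0] + dst[0]                      # ordered IP addresses
    data += struct.pack("!BB", proto, 0)         # protocol number, padding
    data += struct.pack("!HH", src[1], dst[1])   # ordered ports
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")

# Both directions of the same flow produce the same identifier:
print(community_id_v1("10.0.0.1", 51234, "203.0.113.5", 22))
print(community_id_v1("203.0.113.5", 22, "10.0.0.1", 51234))
```

This symmetry is why one `community_id` value groups every packet and alert belonging to a single connection, regardless of direction.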
Click the All data sources button on the right side of the page and select the Detection Alerts button, then click Save. This option will restrict the timeline to only display alerts that Kibana generates. Without this option, Suricata’s alerts will also be included in the timeline.
To save your new timeline, click the pencil icon at the top left of the timeline page. The following screenshot highlights where to find the pencil icon, and the All data sources button:
You can add text to the description field if there is additional information that you want to add to the timeline. Once you are done editing the timeline name and description, click the Save button at the bottom right of the modal dialog.
Repeat the above steps to create timelines for other alerts that you would like to examine in more depth later.
In the next step, you’ll use your timeline views of events to attach alerts to Kibana’s Cases app in the SIEM suite of tools.
In the previous step, you created a timeline to group individual alerts and packets together based on Suricata’s `community_id` field. In this section of the tutorial you will create a Case to track and manage the alerts in your timeline.
To create a new case from your timeline, ensure that you are on a timeline page in your browser. Click the Attach to case button at the top right of the timeline page, and then the Attach to new case button from the list that appears.
You will be redirected to a page where you can input information about the incident that you are investigating. In the following example screenshot, the case is for our example SSH TRAFFIC on non-SSH port alerts:
Fill out the fields with a descriptive name and optional tag or tags. In this example the name of the case is `SSH TRAFFIC on non-SSH port from 203.0.113.5`, since that is the specific type of traffic and host that we’re investigating. With many events to investigate in your SIEM system, a naming scheme like this will help you keep track of cases, timelines, and alerts, since the name will correspond to the Kibana alert, and to the Suricata signature’s message field.
Scroll to the bottom of the page and click the Create case button. Your browser will load a page that shows the saved case. You can add comments in Markdown format with additional information, as well as edit the case from this page.
Next, click the link in the description to go to the case’s timeline that you added in the previous step of this tutorial.
For each alert that you would like to include in the case, click the More actions icon on the alert. Click Add to existing case.
Click the case name in the modal that pops up to add the alert to the case. Be sure to select the case that corresponds to the timeline and alerts that you are investigating. Repeat adding each alert in the list to the existing case.
Now visit the [Cases app](http://localhost:5601/app/security/cases) again in Kibana using the navigation menu on the left side of the page. Click on your case and note how the alerts that you added are listed in the case details:
From here you can scroll to the bottom of the case and add any additional information that you would like: for example, any steps that you have taken to investigate an alert or alerts, configuration changes to Suricata like a new or edited rule, escalation to another team member, or anything else that is relevant to the case.
Once you are comfortable with creating cases for the various types of alerts that you want to keep track of, you can now use Kibana’s SIEM tools to organize and coordinate investigating any alert in one central location.
In this tutorial you built on your existing Suricata and Elastic Stack SIEM system by adding rules to Kibana that generate alerts about specific traffic of interest. You also created a timeline or timelines to group sets of alerts based on their `community_id` field. Finally, you created a case and linked your timeline to it, along with the individual alerts of interest.
With this SIEM system in place, you can now track security events across your systems at almost any scale. As you become more familiar with Suricata and track the alerts that it generates in your Kibana SIEM, you will be able to customize the Suricata alerts and default actions that it takes to suit your particular network.
For more information about Kibana’s SIEM tools, visit the official Elastic Security Documentation. The guides there explain how to use Rules, Alerts, Timelines, and Cases in much more detail.
For a more lightweight SIEM interface, you might also be interested in EveBox, which presents all of the Suricata event data and SIEM functionality on a single page.
The previous tutorials in this series guided you through installing, configuring, and running Suricata as an intrusion detection (IDS) and intrusion prevention (IPS) system. You also learned about Suricata rules and how to create your own.
In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic Stack and Rocky Linux 8. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.
The components that you will use to build your own SIEM are:
- Elasticsearch, to store, index, correlate, and search the security events that come from your Suricata server.
- Kibana, to display and navigate around the security event logs that are stored in Elasticsearch.
- Filebeat, to parse Suricata’s `eve.json` log file and send each event to Elasticsearch for processing.
First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its `eve.json` logs to Elasticsearch.
Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.
If you have been following this tutorial series then you should already have Suricata running on a Rocky Linux server. This server will be referred to as your Suricata server.
You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a Rocky Linux 8 server with at least 4GB RAM, 2 CPUs, and a non-root user configured.
For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.
The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:
- sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, create an `elasticsearch.repo` file in your `/etc/yum.repos.d` directory with the following contents, using `vi` or your preferred editor. This ensures that the upstream Elasticsearch repositories will be used when installing new packages via `yum`:
- sudo vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
If you are using `vi`, when you are finished making changes, press `ESC` and then `:x` to write the changes to the file and quit.
Now install Elasticsearch and Kibana using the `dnf` command. Press `Y` to accept any prompts about GPG key fingerprints:
- sudo dnf install --enablerepo=elasticsearch elasticsearch kibana
The `--enablerepo` option is used to override the default disabled setting in the `/etc/yum.repos.d/elasticsearch.repo` file. This approach ensures that the Elasticsearch and Kibana packages do not get accidentally upgraded when you install other package updates to your server.
Once you are done installing the packages, find and record your server’s private IP address using the `ip address show` command:
- ip -brief address show
You will receive output like the following:
Output
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1 UP 10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64
The private network interface in this output is the highlighted `eth1` device, with the IPv4 address `10.137.0.5`. Your device name and IP addresses will be different. Regardless of your device name and private IP address, the address will be from the following reserved blocks:
- `10.0.0.0` to `10.255.255.255` (10/8 prefix)
- `172.16.0.0` to `172.31.255.255` (172.16/12 prefix)
- `192.168.0.0` to `192.168.255.255` (192.168/16 prefix)
If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.
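If you want to double-check which category an address falls into, Python’s standard `ipaddress` module can classify it for you. The addresses below are examples (one from each reserved block, plus the sample public address shown earlier):

```python
import ipaddress

# One address from each RFC 1918 block, plus a public one for contrast.
for addr in ("10.137.0.5", "172.16.4.2", "192.168.1.10", "159.89.122.115"):
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "private" if ip.is_private else "public")
```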
Record the private IP address for your Elasticsearch server (in this case `10.137.0.5`). This address will be referred to as `your_private_ip` in the remainder of this tutorial. Also note the name of the network interface, in this case `eth1`. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.
Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in `xpack` security module.
Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface.
Open the `/etc/elasticsearch/elasticsearch.yml` file using `vi` or your preferred editor:
- sudo vi /etc/elasticsearch/elasticsearch.yml
Find the commented out `#network.host: 192.168.0.1` line between lines 50–60 and add a new line after it that configures the `network.bind_host` setting, as highlighted below:
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.bind_host: ["127.0.0.1", "your_private_ip"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
Substitute your private IP in place of the `your_private_ip` address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.
Next, go to the end of the file using the `vi` shortcut `SHIFT+G`.
Add the following highlighted lines to the end of the file:
. . .
discovery.type: single-node
xpack.security.enabled: true
The `discovery.type` setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The `xpack.security.enabled` setting turns on some of the security features that are included with Elasticsearch.
Save and close the file when you are done editing it.
Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using `firewalld`, run the following commands:
- sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
- sudo firewall-cmd --permanent --zone=internal --add-service=elasticsearch
- sudo firewall-cmd --permanent --zone=internal --add-service=kibana
- sudo systemctl reload firewalld.service
Substitute your private network interface name in place of `eth1` in the first command if yours is different. That command changes the interface rules to use the `internal` Firewalld zone, which is more permissive than the default `public` zone.
The next commands add rules to allow Elasticsearch traffic on ports 9200 and 9300, along with Kibana traffic on port 5601.
The final command reloads the Firewalld service with the new permanent rules in place.
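To confirm from another host (for example, your Suricata server) that the firewall permits these connections, you can attempt a TCP connection to each port. Here is a minimal Python sketch; `your_private_ip` is a placeholder for your Elasticsearch server’s address:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Substitute your Elasticsearch server's private IP address below.
for port in (9200, 9300, 5601):
    state = "open" if port_open("your_private_ip", port) else "closed"
    print(port, state)
```

Note that the ports will only report open once the Elasticsearch and Kibana services are actually running, which you will do in the following sections.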
Next you will start the Elasticsearch daemon and then configure passwords for use with the `xpack` security module.
Now that you have configured networking and the `xpack` security settings for Elasticsearch, you need to start it for the changes to take effect.
Run the following `systemctl` command to start Elasticsearch:
- sudo systemctl start elasticsearch.service
Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.
Now that you have enabled the `xpack.security.enabled` setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the `/usr/share/elasticsearch/bin` directory that can automatically generate random passwords for these users.
Run the following command to `cd` to the directory and then generate random passwords for all the default users:
- cd /usr/share/elasticsearch/bin
- sudo ./elasticsearch-setup-passwords auto
You will receive output like the following. When prompted to continue, press `y` and then `RETURN` or `ENTER`:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
Changed password for user kibana_system
PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
Changed password for user kibana
PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
Changed password for user logstash_system
PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
Changed password for user beats_system
PASSWORD beats_system = 2p81hIdAzWKknhzA992m
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
Changed password for user elastic
PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the `kibana_system` user’s password in the next section of this tutorial, and the `elastic` user’s password in the Configuring Filebeat step of this tutorial.
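Clients such as Filebeat and `curl` present these credentials to Elasticsearch as an HTTP Basic `Authorization` header, which is simply the Base64 encoding of `username:password`. This Python sketch (using the example `elastic` password from the output above) shows how that header value is constructed:

```python
import base64

def basic_auth_header(username, password):
    # Base64-encode "username:password" for an HTTP Basic Authorization header.
    token = base64.b64encode(f"{username}:{password}".encode()).decode("ascii")
    return "Basic " + token

# Example credentials from the generated output above.
print(basic_auth_header("elastic", "6kNbsxQGYZ2EQJiqJpgl"))
```

Because Base64 is an encoding, not encryption, anyone who can read this header can recover the password, which is one more reason to keep Elasticsearch restricted to a private network as you configured earlier.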
At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its `xpack` security module.
In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.
First you’ll enable Kibana’s `xpack` security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network settings and authentication details to connect to Elasticsearch.
Configuring `xpack.security` in Kibana
To get started with `xpack` security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.
You can generate the required encryption keys using the `kibana-encryption-keys` utility that is included in the `/usr/share/kibana/bin` directory. Run the following to `cd` to the directory and then generate the keys:
- cd /usr/share/kibana/bin/
- sudo ./kibana-encryption-keys generate -q --force
The `-q` flag suppresses the tool’s instructions, and the `--force` flag will ensure that you create new keys. You should receive output like the following:
Output
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
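Each key in this output is 32 hexadecimal characters, i.e. 16 random bytes; Kibana requires these `xpack` encryption keys to be at least 32 characters long. As an illustration of the format (on a server, prefer the `kibana-encryption-keys` utility itself), Python’s `secrets` module can generate a key of the same shape:

```python
import secrets

# 16 random bytes rendered as 32 lowercase hex characters: the same
# shape as the keys emitted by kibana-encryption-keys.
key = secrets.token_hex(16)
print(key)
```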
Copy these three keys somewhere secure. You will now add them to Kibana’s `/etc/kibana/kibana.yml` configuration file.
Open the file using `vi` or your preferred editor:
- sudo vi /etc/kibana/kibana.yml
Go to the end of the file using the `vi` shortcut `SHIFT+G`. Paste the three `xpack` lines that you copied to the end of the file:
. . .
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Keep the file open and proceed to the next section where you will configure Kibana’s network settings.
To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out `#server.host: "localhost"` line in `/etc/kibana/kibana.yml`. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "your_private_ip"
Substitute your private IP in place of the `your_private_ip` address.
Save and close the file when you are done editing it. Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.
There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the `/etc/kibana/kibana.yml` configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.
We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly. If you prefer to edit the file instead, the settings to configure in it are `elasticsearch.username` and `elasticsearch.password`. If you choose to edit the configuration file, skip the rest of the steps in this section.
To add a secret to the keystore using the `kibana-keystore` utility, first `cd` to the `/usr/share/kibana/bin` directory. Next, run the following command to set the username for Kibana:
- cd /usr/share/kibana/bin
- sudo ./kibana-keystore add elasticsearch.username
You will receive a prompt like the following:
Enter value for elasticsearch.username: *************
Enter `kibana_system` when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an `*` asterisk character. Press `ENTER` or `RETURN` when you are done entering the username.
Now repeat the process, this time to save the password. Be sure to copy the password for the `kibana_system` user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is `1HLVxfqZMd7aFQS6Uabl`.
Run the following command to set the password:
- sudo ./kibana-keystore add elasticsearch.password
When prompted, paste the password to avoid any transcription errors:
Enter value for elasticsearch.password: ********************
Now that you have configured networking and the `xpack` security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.
Run the following `systemctl` command to start Kibana:
- sudo systemctl start kibana.service
Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.
Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.
To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:
- sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, create an `elasticsearch.repo` file in your `/etc/yum.repos.d` directory with the following contents, using `vi` or your preferred editor:
- sudo vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
When you are finished making changes, save and exit the file. Now install the Filebeat package using the `dnf` command:
- sudo dnf install --enablerepo=elasticsearch filebeat
Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the `/etc/filebeat/filebeat.yml` configuration file using `vi` or your preferred editor:
- sudo vi /etc/filebeat/filebeat.yml
Find the `Kibana` section of the file around line 100. Add a line after the commented out `#host: "localhost:5601"` line that points to your Kibana instance’s private IP address and port:
. . .
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
host: "your_private_ip:5601"
. . .
This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.
Next, find the `Elasticsearch Output` section of the file around line 130 and edit the `hosts`, `username`, and `password` settings to match the values for your Elasticsearch server:
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["your_private_ip:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "6kNbsxQGYZ2EQJiqJpgl"
. . .
Substitute in your Elasticsearch server’s private IP address on the `hosts` line. Uncomment the `username` field and leave it set to the `elastic` user. Change the `password` field from `changeme` to the password for the `elastic` user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.
Save and close the file when you are done editing it. Next, enable Filebeat’s built-in Suricata module with the following command:
- sudo filebeat modules enable suricata
Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
Run the `filebeat setup` command. It may take a few minutes to load everything:
- sudo filebeat setup
Once the command finishes you should receive output like the following:
Output
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines
If there are no errors, use the systemctl
command to start Filebeat. It will begin sending events from Suricata’s eve.json
log to Elasticsearch once it is running.
- sudo systemctl start filebeat.service
Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.
Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.
SSH has an option -L
that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.
On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.
Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:
- ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N
The various arguments to SSH are:

- The -L flag forwards traffic to your local system on port 5601 to the remote server.
- The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
- The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
- The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

If you would like to close the tunnel at any time, press CTRL+C.
On Windows your terminal should resemble the following screenshot:
Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER
or RETURN
.
On macOS and Linux your terminal will be similar to the following screenshot:
Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:
If your browser cannot connect to Kibana you will receive a message like the following in your terminal:
Output
channel 3: open failed: connect failed: No route to host
This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.
Log in to your Kibana server using elastic
for the Username, and the password that you copied earlier in this tutorial for the user.
Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.
In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata
. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:
Click the [Filebeat Suricata] Events Overview
result to visit the Kibana dashboard that shows an overview of all logged Suricata events:
To visit the Suricata Alerts dashboard, repeat the search or click the Alerts
link that is included in the Events dashboard. Your page should resemble the following screenshot:
If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.
Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:
You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline, that you can then use to investigate specific traffic flows, alerts, or community IDs.
In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack
security module that is included with each tool.
After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.
Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.
The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.
]]>The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection System (IDS) and Intrusion Prevention System (IPS). You also learned about Suricata rules and how to create your own.
In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and Debian 11. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.
The components that you will use to build your own SIEM tool are:

- Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
- Kibana to display and navigate around the security event dashboards that you will create in Elasticsearch.
- Filebeat on your Suricata server to parse its eve.json log file and send each event to Elasticsearch for processing.

First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.
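Each line in Suricata’s eve.json file is a single JSON document describing one event. As an illustrative, abbreviated sketch (not actual output from your server), an alert event resembles the following:

```json
{"timestamp": "2021-11-23T17:18:41.612761-0500", "event_type": "alert", "src_ip": "203.0.113.5", "dest_ip": "10.0.0.5", "proto": "TCP", "alert": {"signature_id": 2100498, "signature": "GPL ATTACK_RESPONSE id check returned root", "category": "Potentially Bad Traffic"}}
```

Filebeat’s Suricata module parses each of these documents and maps the fields into Elasticsearch’s schema before indexing them.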
Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.
If you have been following this tutorial series then you should already have Suricata running on a Debian 11 server. This server will be referred to as your Suricata server.
You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a Debian 11 server with:
For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.
The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d
directory, where apt
will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Now update your server’s package index and install Elasticsearch and Kibana:
- sudo apt update
- sudo apt install elasticsearch kibana
Once you are done installing the packages, find and record your server’s private IP address using the ip address show
command:
- ip -brief address show
You will receive output like the following:
Output
lo       UNKNOWN        127.0.0.1/8 ::1/128
eth0 UP 159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1 UP 10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64
The private network interface in this output is the highlighted eth1 device, with the IPv4 address 10.137.0.5/16. Your device name and IP addresses will be different. However, the address will be from the following reserved blocks of addresses:

- 10.0.0.0 to 10.255.255.255 (10/8 prefix)
- 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
- 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.
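Given output like the example above, a short awk filter can pick out the RFC 1918 IPv4 address automatically. The following is a convenience sketch (not a required tutorial step) that runs against a captured sample; on your server you would pipe the real ip -brief address show output through the same filter:

```shell
# Sample `ip -brief address show` output (yours will differ):
sample='lo       UNKNOWN  127.0.0.1/8 ::1/128
eth0     UP       159.89.122.115/20 10.20.0.8/16
eth1     UP       10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64'

# Print each interface alongside its first RFC 1918 IPv4 address.
private_addrs=$(printf '%s\n' "$sample" | awk '
  {
    for (i = 3; i <= NF; i++)
      if ($i ~ /^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/) {
        print $1, $i
        next
      }
  }')
echo "$private_addrs"
```

Interfaces with only loopback or public addresses are skipped, so the output lists just the candidates for your_private_ip.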
Record the private IP address for your Elasticsearch server (in this case 10.137.0.5
). This address will be referred to as your_private_ip
in the remainder of this tutorial. Also note the name of the network interface, in this case eth1
. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.
Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack
security module.
Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface. You will also need to configure your firewall rules to allow access to Elasticsearch on your private network interface.
Open the /etc/elasticsearch/elasticsearch.yml
file using nano
or your preferred editor:
- sudo nano /etc/elasticsearch/elasticsearch.yml
Find the commented out #network.host: 192.168.0.1
line between lines 50–60 and add a new line after it that configures the network.bind_host
setting, as highlighted below:
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.bind_host: ["127.0.0.1", "your_private_ip"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
Substitute your private IP in place of the your_private_ip
address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.
Next, go to the end of the file using the nano shortcut CTRL+V until you reach the end.
Add the following highlighted lines to the end of the file:
. . .
discovery.type: single-node
xpack.security.enabled: true
The discovery.type
setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled
setting turns on some of the security features that are included with Elasticsearch.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using the Uncomplicated Firewall (ufw
), run the following commands:
- sudo ufw allow in on eth1
- sudo ufw allow out on eth1
Substitute your private network interface in place of eth1
if it uses a different name.
Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack
security module.
Now that you have configured networking and the xpack
security settings for Elasticsearch, you need to start it for the changes to take effect.
Run the following systemctl
command to start Elasticsearch:
- sudo systemctl start elasticsearch.service
Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.
Now that you have enabled the xpack.security.enabled
setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin
directory that can automatically generate random passwords for these users.
Run the following command to cd
to the directory and then generate random passwords for all the default users:
- cd /usr/share/elasticsearch/bin
- sudo ./elasticsearch-setup-passwords auto
You will receive output like the following. When prompted to continue, press y
and then RETURN
or ENTER
:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
Changed password for user kibana_system
PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
Changed password for user kibana
PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
Changed password for user logstash_system
PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
Changed password for user beats_system
PASSWORD beats_system = 2p81hIdAzWKknhzA992m
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
Changed password for user elastic
PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system
user’s password in the next section of this tutorial, and the elastic
user’s password in the Configuring Filebeat step of this tutorial.
At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack
security module.
In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.
First you’ll enable Kibana’s xpack
security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.
xpack.security
in KibanaTo get started with xpack
security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.
You can generate the required encryption keys using the kibana-encryption-keys
utility that is included in the /usr/share/kibana/bin
directory. Run the following to cd
to the directory and then generate the keys:
- cd /usr/share/kibana/bin/
- sudo ./kibana-encryption-keys generate -q
The -q
flag suppresses the tool’s instructions so that you only receive output like the following:
Output
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Copy your output somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml
configuration file.
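The kibana-encryption-keys utility is the supported way to generate these values, but each key is simply a random string of at least 32 characters. As an illustrative alternative (not a step in this tutorial), if you ever need to produce one by hand, openssl can generate a suitable value:

```shell
# Each xpack encryption key is a random string of at least 32 characters;
# 16 random bytes rendered as hex gives exactly that length.
openssl rand -hex 16
```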
Open the file using nano
or your preferred editor:
- sudo nano /etc/kibana/kibana.yml
Go to the end of the file using the nano shortcut CTRL+V until you reach the end. Paste the three xpack
lines that you copied to the end of the file:
. . .
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Keep the file open and proceed to the next section where you will configure Kibana’s network settings.
To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost"
line in /etc/kibana/kibana.yml
. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "your_private_ip"
Substitute your private IP in place of the your_private_ip
address.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.
There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml
configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.
We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.
If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username
and elasticsearch.password
.
If you choose to edit the configuration file, skip the rest of the steps in this section.
To add a secret to the keystore using the kibana-keystore
utility, first cd
to the /usr/share/kibana/bin
directory. Next, run the following command to set the username for Kibana:
- sudo ./kibana-keystore add elasticsearch.username
You will receive a prompt like the following:
Enter value for elasticsearch.username: *************
Enter kibana_system
when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an *
asterisk character. Press ENTER
or RETURN
when you are done entering the username.
Now repeat the same command for the password. Be sure to copy the password for the kibana_system
user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl
.
Run the following command to set the password:
- sudo ./kibana-keystore add elasticsearch.password
When prompted, paste the password to avoid any transcription errors:
Enter value for elasticsearch.password: ********************
Now that you have configured networking and the xpack
security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.
Run the following systemctl
command to start Kibana:
- sudo systemctl start kibana.service
Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.
Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.
To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d
directory, where apt
will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Now update the server’s package index and install the Filebeat package:
- sudo apt update
- sudo apt install filebeat
Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml
configuration file using nano
or your preferred editor:
- sudo nano /etc/filebeat/filebeat.yml
Find the Kibana
section of the file around line 100. Add a line after the commented out #host: "localhost:5601"
line that points to your Kibana instance’s private IP address and port:
. . .
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
host: "your_private_ip:5601"
. . .
This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.
Next, find the Elasticsearch Output
section of the file around line 130 and edit the hosts
, username
, and password
settings to match the values for your Elasticsearch server:
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["your_private_ip:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "6kNbsxQGYZ2EQJiqJpgl"
. . .
Substitute in your Elasticsearch server’s private IP address on the hosts
line in place of the your_private_ip
value. Uncomment the username
field and leave it set to the elastic
user. Change the password
field from changeme
to the password for the elastic
user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
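With the comments stripped away, the two sections you edited in /etc/filebeat/filebeat.yml reduce to the following active settings (the IP address and password shown are the example values from above; substitute your own):

```yaml
setup.kibana:
  host: "your_private_ip:5601"

output.elasticsearch:
  hosts: ["your_private_ip:9200"]
  username: "elastic"
  password: "6kNbsxQGYZ2EQJiqJpgl"
```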
Next, enable Filebeat’s built-in Suricata module with the following command:
- sudo filebeat modules enable suricata
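Enabling the module activates the file /etc/filebeat/modules.d/suricata.yml. The defaults work for a standard install; if your Suricata writes eve.json somewhere non-default, that file is where you point Filebeat at it. A sketch of the relevant settings (the path shown is the usual Suricata default and is an assumption about your install):

```yaml
# /etc/filebeat/modules.d/suricata.yml
- module: suricata
  eve:
    enabled: true
    # Uncomment and adjust if eve.json is not in the default location:
    #var.paths: ["/var/log/suricata/eve.json"]
```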
Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
Run the filebeat setup
command. It may take a few minutes to load everything:
- sudo filebeat setup
Once the command finishes you should receive output like the following:
Output
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possible to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines
If there are no errors, use the systemctl
command to start Filebeat. It will begin sending events from Suricata’s eve.json
log to Elasticsearch once it is running.
- sudo systemctl start filebeat.service
Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.
Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.
SSH has an option -L
that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.
On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.
Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:
- ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N
The various arguments to SSH are:

- The -L flag forwards traffic to your local system on port 5601 to the remote server.
- The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
- The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
- The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

If you would like to close the tunnel at any time, press CTRL+C.
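If you plan to open this tunnel regularly, the same options can live in your local ~/.ssh/config so that a short alias recreates it. A sketch, where kibana-tunnel is a hypothetical alias and the addresses are the placeholders from above:

```
Host kibana-tunnel
    HostName 203.0.113.5
    User sammy
    LocalForward 5601 your_private_ip:5601
```

With that entry in place, running ssh -N kibana-tunnel establishes the same forward.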
On Windows your terminal should resemble the following screenshot:
Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER
or RETURN
.
On macOS and Linux your terminal will be similar to the following screenshot:
Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:
If your browser cannot connect to Kibana you will receive a message like the following in your terminal:
Output
channel 3: open failed: connect failed: No route to host
This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.
Log in to your Kibana server using elastic
for the Username, and the password that you copied earlier in this tutorial for the user.
Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.
In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata
. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:
Click the [Filebeat Suricata] Events Overview
result to visit the Kibana dashboard that shows an overview of all logged Suricata events:
To visit the Suricata Alerts dashboard, repeat the search or click the Alerts
link that is included in the Events dashboard. Your page should resemble the following screenshot:
If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.
Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:
You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline, that you can then use to investigate specific traffic flows, alerts, or community IDs.
In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack
security module that is included with each tool.
After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.
Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.
The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.
]]>The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection System (IDS) and Intrusion Prevention System (IPS). You also learned about Suricata rules and how to create your own.
In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and CentOS 8 Stream. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.
The components that you will use to build your own SIEM are:

- Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
- Kibana to display and navigate around the security event dashboards that you will create in Elasticsearch.
- Filebeat on your Suricata server to parse its eve.json log file and send each event to Elasticsearch for processing.

First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.
Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.
If you have been following this tutorial series then you should already have Suricata running on a CentOS 8 Stream server. This server will be referred to as your Suricata server.
You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a CentOS 8 Stream server with:
For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.
The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:
- sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, create an elasticsearch.repo
file in your /etc/yum/yum.repos.d
directory with the following contents, using vi
or your preferred editor. This ensures that the upstream Elasticsearch repositories will be used when installing new packages via yum
:
- sudo vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
If you are using vi
, when you are finished making changes, press ESC
and then :x
to write the changes to the file and quit.
Now install Elasticsearch and Kibana using the dnf
command. Press Y
to accept any prompts about GPG key fingerprints:
- sudo dnf install --enablerepo=elasticsearch elasticsearch kibana
The --enablerepo
option is used to override the default disabled setting in the /etc/yum.repos.d/elasticsearch.repo
file. This approach ensures that the Elasticsearch and Kibana packages do not get accidentally upgraded when you install other package updates to your server.
Once you are done installing the packages, find and record your server’s private IP address using the ip address show
command:
- ip -brief address show
You will receive output like the following:
Outputlo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1 UP 10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64
The private network interface in this output is the highlighted eth1 device, with the IPv4 address 10.137.0.5. Your device name and IP addresses will be different. Regardless of your device name and private IP address, the address will be from one of the following reserved blocks:
10.0.0.0 to 10.255.255.255 (10/8 prefix)
172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
192.168.0.0 to 192.168.255.255 (192.168/16 prefix)
If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.
Record the private IP address for your Elasticsearch server (in this case 10.137.0.5
). This address will be referred to as your_private_ip
in the remainder of this tutorial. Also note the name of the network interface, in this case eth1
. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.
Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack
security module.
Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface.
Open the /etc/elasticsearch/elasticsearch.yml
file using vi
or your preferred editor:
- sudo vi /etc/elasticsearch/elasticsearch.yml
Find the commented out #network.host: 192.168.0.1
line between lines 50–60 and add a new line after it that configures the network.bind_host
setting, as highlighted below:
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.bind_host: ["127.0.0.1", "your_private_ip"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
Substitute your private IP in place of the your_private_ip
address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.
Next, go to the end of the file using the vi
shortcut SHIFT+G
.
Add the following highlighted lines to the end of the file:
. . .
discovery.type: single-node
xpack.security.enabled: true
The discovery.type
setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled
setting turns on some of the security features that are included with Elasticsearch.
Save and close the file when you are done editing it.
Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using firewalld
, run the following commands:
- sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
- sudo firewall-cmd --permanent --zone=internal --add-service=elasticsearch
- sudo firewall-cmd --permanent --zone=internal --add-service=kibana
- sudo systemctl reload firewalld.service
Substitute your private network interface name in place of eth1
in the first command if yours is different. That command changes the interface rules to use the internal
Firewalld zone, which is more permissive than the default public
zone.
The next commands add rules to allow Elasticsearch traffic on ports 9200 and 9300, along with Kibana traffic on port 5601.
The final command reloads the Firewalld service with the new permanent rules in place.
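If your firewalld installation does not already define elasticsearch and kibana services (the prerequisite tutorials create them), a minimal service definition for Kibana could look like the following, saved as /etc/firewalld/services/kibana.xml. This file is a hypothetical example for reference only; the port number comes from the commands above:

```xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>kibana</short>
  <description>Allow traffic to the Kibana web interface</description>
  <port protocol="tcp" port="5601"/>
</service>
```

After adding or changing a service file, reload firewalld so that the new definition is available to the --add-service flag.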
Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack
security module.
Now that you have configured networking and the xpack
security settings for Elasticsearch, you need to start it for the changes to take effect.
Run the following systemctl
command to start Elasticsearch:
- sudo systemctl start elasticsearch.service
Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.
Now that you have enabled the xpack.security.enabled
setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin
directory that can automatically generate random passwords for these users.
Run the following command to cd
to the directory and then generate random passwords for all the default users:
- cd /usr/share/elasticsearch/bin
- sudo ./elasticsearch-setup-passwords auto
You will receive output like the following. When prompted to continue, press y
and then RETURN
or ENTER
:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
Changed password for user kibana_system
PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
Changed password for user kibana
PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
Changed password for user logstash_system
PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
Changed password for user beats_system
PASSWORD beats_system = 2p81hIdAzWKknhzA992m
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
Changed password for user elastic
PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system
user’s password in the next section of this tutorial, and the elastic
user’s password in the Configuring Filebeat step of this tutorial.
At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack
security module.
In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.
First you’ll enable Kibana’s xpack
security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.
xpack.security in Kibana
To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.
You can generate the required encryption keys using the kibana-encryption-keys
utility that is included in the /usr/share/kibana/bin
directory. Run the following to cd
to the directory and then generate the keys:
- cd /usr/share/kibana/bin/
- sudo ./kibana-encryption-keys generate -q --force
The -q
flag suppresses the tool’s instructions, and the --force
flag will ensure that you create new keys. You should receive output like the following:
Outputxpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Copy these three keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml
configuration file.
Open the file using vi
or your preferred editor:
- sudo vi /etc/kibana/kibana.yml
Go to the end of the file using the vi
shortcut SHIFT+G
. Paste the three xpack
lines that you copied to the end of the file:
. . .
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Keep the file open and proceed to the next section where you will configure Kibana’s network settings.
To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost"
line in /etc/kibana/kibana.yml
. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "your_private_ip"
Substitute your private IP in place of the your_private_ip
address.
Save and close the file when you are done editing it. Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.
There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml
configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.
We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.
If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username
and elasticsearch.password
.
If you choose to edit the configuration file, skip the rest of the steps in this section.
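For reference, if you do edit /etc/kibana/kibana.yml directly instead, the two settings would look like the following, shown here with this tutorial’s example kibana_system password; substitute the password that you generated:

```yaml
elasticsearch.username: "kibana_system"
elasticsearch.password: "1HLVxfqZMd7aFQS6Uabl"
```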
To add a secret to the keystore using the kibana-keystore
utility, first cd
to the /usr/share/kibana/bin
directory. Next, run the following command to set the username for Kibana:
- cd /usr/share/kibana/bin
- sudo ./kibana-keystore add elasticsearch.username
You will receive a prompt like the following:
Enter value for elasticsearch.username: *************
Enter kibana_system
when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an *
asterisk character. Press ENTER
or RETURN
when you are done entering the username.
Now repeat the process, this time to save the password. Be sure to copy the password for the kibana_system
user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl
.
Run the following command to set the password:
- sudo ./kibana-keystore add elasticsearch.password
When prompted, paste the password to avoid any transcription errors:
Enter value for elasticsearch.password: ********************
Now that you have configured networking and the xpack
security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.
Run the following systemctl
command to restart Kibana:
- sudo systemctl start kibana.service
Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.
Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.
To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:
- sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor:
- sudo vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
When you are finished making changes save and exit the file. Now install the Filebeat package using the dnf
command:
- sudo dnf install --enablerepo=elasticsearch filebeat
Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml
configuration file using vi
or your preferred editor:
- sudo vi /etc/filebeat/filebeat.yml
Find the Kibana
section of the file around line 100. Add a line after the commented out #host: "localhost:5601"
line that points to your Kibana instance’s private IP address and port:
. . .
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
host: "your_private_ip:5601"
. . .
This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.
Next, find the Elasticsearch Output
section of the file around line 130 and edit the hosts
, username
, and password
settings to match the values for your Elasticsearch server:
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["your_private_ip:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "6kNbsxQGYZ2EQJiqJpgl"
. . .
Substitute in your Elasticsearch server’s private IP address on the hosts
line. Uncomment the username
field and leave it set to the elastic
user. Change the password
field from changeme
to the password for the elastic
user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.
Save and close the file when you are done editing it. Next, enable Filebeat’s built-in Suricata module with the following command:
- sudo filebeat modules enable suricata
Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
Run the filebeat setup
command. It may take a few minutes to load everything:
- sudo filebeat setup
Once the command finishes you should receive output like the following:
OutputOverwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines
If there are no errors, use the systemctl
command to start Filebeat. It will begin sending events from Suricata’s eve.json
log to Elasticsearch once it is running.
- sudo systemctl start filebeat.service
Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.
Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.
SSH has an option -L
that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.
On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.
Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:
- ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N
The various arguments to SSH are:
The -L flag forwards traffic to your local system on port 5601 to the remote server.
The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.
If you would like to close the tunnel at any time, press CTRL+C.
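If you find yourself recreating this tunnel often, you can persist the forwarding in your SSH client configuration instead of retyping the flags. The following ~/.ssh/config entry is a hedged example (the kibana-tunnel host alias is hypothetical; substitute your own user and addresses), after which running ssh -N kibana-tunnel re-establishes the same tunnel:

```
Host kibana-tunnel
    HostName 203.0.113.5
    User sammy
    LocalForward 5601 your_private_ip:5601
```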
On Windows your terminal should resemble the following screenshot:
Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER
or RETURN
.
On macOS and Linux your terminal will be similar to the following screenshot:
Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:
If your browser cannot connect to Kibana you will receive a message like the following in your terminal:
Outputchannel 3: open failed: connect failed: No route to host
This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.
Log in to your Kibana server using elastic
for the Username, and the password that you copied earlier in this tutorial for the user.
Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.
In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata
. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:
Click the [Filebeat Suricata] Events Overview
result to visit the Kibana dashboard that shows an overview of all logged Suricata events:
To visit the Suricata Alerts dashboard, repeat the search or click the Alerts
link that is included in the Events dashboard. Your page should resemble the following screenshot:
If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.
Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:
You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.
In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack
security module that is included with each tool.
After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.
Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.
The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.
Suricata is a Network Security Monitoring (NSM) tool that uses sets of community created and user defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.
By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.
You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.
In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on CentOS 8 Stream to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.
Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.
If you plan to use Suricata to protect the server that it is running on, you will need:
Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.
If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most CentOS, Fedora, and other RedHat derived servers as well.
To get started installing Suricata, you will need to add the Open Information Security Foundation’s (OISF) software repository information to your CentOS system. You can use the dnf copr enable
command to do this. You will also need to add the Extra Packages for Enterprise Linux (EPEL) repository.
To enable the Community Projects (copr
) subcommand for the dnf
package tool, run the following:
- sudo dnf install 'dnf-command(copr)'
You will be prompted to install some additional dependencies, as well as accept the GPG key for the CentOS Linux distribution. Press y
and ENTER
each time to finish installing the copr
package.
Next run the following command to add the OISF repository to your system and update the list of available packages:
- sudo dnf copr enable @oisf/suricata-6.0
Press y
and ENTER
when you are prompted to confirm that you want to add the repository.
Now add the epel-release
package, which will make some extra dependency packages available for Suricata:
- sudo dnf install epel-release
When you are prompted to import the GPG key, press y
and ENTER
to accept.
Now that you have the required software repositories enabled, you can install the suricata
package using the dnf
command:
- sudo dnf install suricata
When you are prompted to add the GPG key for the OISF repository, press y
and ENTER
. The package and its dependencies will now be downloaded and installed.
Next, enable the suricata.service
so that it will run when your system restarts. Use the systemctl
command to enable it:
- sudo systemctl enable suricata.service
You should receive output like the following indicating the service is enabled:
OutputCreated symlink /etc/systemd/system/multi-user.target.wants/suricata.service → /usr/lib/systemd/system/suricata.service.
Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl
:
- sudo systemctl stop suricata.service
Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.
The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.
However, the default configuration still has a few settings that you may need to change depending on your environment and needs.
Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.
If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.
To enable the option, open /etc/suricata/suricata.yaml
using vi
or your preferred editor:
- sudo vi /etc/suricata/suricata.yaml
Find line 120 which reads # Community Flow ID
. If you are using vi
type 120gg
to go directly to the line. Below that line is the community-id
key. Set it to true
to enable the setting:
. . .
# Community Flow ID
# Adds a 'community_id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Zeek (Bro).
#
# Takes a 'seed' that needs to be same across sensors and tools
# to make the id less predictable.
# enable/disable the community id feature.
community-id: true
. . .
Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ=
that you can use to correlate records across different NSM tools.
Save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
and then :x
then ENTER
to save and exit the file.
You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0
. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.
To determine the device name of your default network interface, you can use the ip
command as follows:
- ip -p -j route show default
The -p
flag formats the output to be more readable, and the -j
flag prints the output as JSON.
You should receive output like the following:
Output[ {
"dst": "default",
"gateway": "203.0.113.254",
"dev": "eth0",
"protocol": "static",
"metric": 100,
"flags": [ ]
} ]
The dev
line indicates the default device. In this example output, the device is the highlighted eth0
interface. Your output may show a device name like ens...
or eno...
. Whatever the name is, make a note of it.
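If you want to capture that device name in a script, you can extract the dev field from the JSON output. The sed expression below is a minimal sketch run against a sample of the output above; on your server you would pipe the real ip -p -j route show default output instead of the sample variable:

```shell
# Pull the default interface name out of sample `ip -p -j route show default`
# output. The sample variable stands in for the real command's output.
sample='[ { "dst": "default", "gateway": "203.0.113.254", "dev": "eth0", "protocol": "static" } ]'
echo "$sample" | sed -n 's/.*"dev": "\([^"]*\)".*/\1/p'   # prints: eth0
```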
Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml
configuration file using vi
or your preferred editor:
- sudo vi /etc/suricata/suricata.yaml
Scroll through the file until you come to a line that reads af-packet:
around line 580. If you are using vi
you can also go to the line directly by entering 580gg
. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:
# Linux high speed capture support
af-packet:
- interface: eth0
# Number of receive threads. "auto" uses the number of cores
#threads: auto
# Default clusterid. AF_PACKET will load balance packets based on flow.
cluster-id: 99
. . .
If you want to inspect traffic on additional interfaces, you can add more - interface: eth...
YAML objects. For example, to add a device named enp0s1
, scroll down to the bottom of the af-packet
section to around line 650. To add a new interface, insert it before the - interface: default
section like the following highlighted example:
# For eBPF and XDP setup including bypass, filter and load balancing, please
# see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
- interface: enp0s1
cluster-id: 98
- interface: default
#threads: auto
#use-mmap: no
#tpacket-v3: yes
Be sure to choose a unique cluster-id
value for each - interface
object.
Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
, then :x
and ENTER
to save and quit.
Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:
. . .
detect-engine:
- rule-reload: true
With this setting in place, you will be able to send the SIGUSR2
system signal to the running process, and Suricata will reload any changed rules into memory.
A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:
- sudo kill -usr2 $(pidof suricata)
The $(pidof suricata)
portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2
part of the command uses the kill
utility to send the SIGUSR2
signal to the process ID that is reported back by the subshell.
You can use this command any time you run suricata-update
or when you add or edit your own custom rules.
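As a convenience, you could wrap the reload in a small helper that checks whether the daemon is running before signaling it. This is a hypothetical script, not part of Suricata; it assumes pidof is available, and in real use you would run it with sudo:

```shell
#!/bin/sh
# Hypothetical helper: send SIGUSR2 only when Suricata is actually running,
# instead of letting kill fail with an empty PID list.
reload_suricata_rules() {
  pid=$(pidof suricata 2>/dev/null)
  if [ -z "$pid" ]; then
    echo "suricata is not running; nothing to reload" >&2
    return 1
  fi
  kill -USR2 $pid
}
# Demo call: on a machine without Suricata this reports "skipped".
if reload_suricata_rules 2>/dev/null; then
  status="reloaded"
else
  status="skipped"
fi
echo "rule reload: $status"
```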
Save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
, then :x
and ENTER
to confirm.
At this point in the tutorial, if you were to start Suricata, you would receive a warning message in the logs like the following, indicating that there are no rules loaded:
Output<Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules
By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules
directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.
Suricata includes a tool called suricata-update
that can fetch rulesets from external providers. Run it as follows to download an up-to-date ruleset for your Suricata server:
- sudo suricata-update
You should receive output like the following:
Output19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
19/10/2021 -- 19:31:03 - <Info> -- Using /usr/share/suricata/rules for Suricata provided rules.
. . .
19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.3/emerging.rules.tar.gz.
100% - 3062850/3062850
. . .
19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 31011; enabled: 23649; added: 31011; removed 0; modified: 0
19/10/2021 -- 19:31:07 - <Info> -- Writing /var/lib/suricata/rules/classification.config
19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
19/10/2021 -- 19:31:32 - <Info> -- Done.
The highlighted lines indicate suricata-update
has fetched the free Emerging Threats ET Open Rules, and saved them to Suricata’s /var/lib/suricata/rules/suricata.rules
file. It also indicates the number of rules that were processed: in this example, 31011 were added and 23649 of those were enabled.
The suricata-update
tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.
You can list the default set of rule providers using the list-sources
flag to suricata-update
like this:
- sudo suricata-update list-sources
You will receive a list of sources like the following:
Output. . .
19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
Name: et/open
Vendor: Proofpoint
Summary: Emerging Threats Open Ruleset
License: MIT
. . .
For example, if you wanted to include the tgreen/hunting
ruleset, you could enable it using the following command:
- sudo suricata-update enable-source tgreen/hunting
Then run suricata-update
again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.
Now that you have edited Suricata’s configuration file to include the optional Community ID, specified the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.
Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T
flag to run Suricata in test mode. The -v
flag will print some additional information, and the -c
flag tells Suricata where to find its configuration file:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.
With the default ET Open ruleset you should receive output like the following:
Output21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed
21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
21/10/2021 -- 15:00:47 - <Info> - 23882 signatures processed. 1183 are IP-only rules, 4043 are inspecting packet payload, 18453 inspect application layer, 107 are decoder event only
21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete
If there is an error in your configuration file, then test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a nonexistent rules file called test.rules
would generate an error like the following:
Output21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/test.rules
With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.
Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.
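The exit status of the test run can also gate an automated restart, so a broken configuration never takes down a working daemon. The following is a hedged sketch of that idea; it degrades to an "unknown" result on machines where Suricata is not installed:

```shell
#!/bin/sh
# Sketch of a deploy guard: only restart Suricata when the config test passes.
if ! command -v suricata >/dev/null 2>&1; then
  config_ok="unknown"    # Suricata is not installed on this machine
elif suricata -T -c /etc/suricata/suricata.yaml >/dev/null 2>&1; then
  config_ok="yes"        # safe to run: sudo systemctl restart suricata.service
else
  config_ok="no"         # fix the reported errors before restarting
fi
echo "configuration test: $config_ok"
```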
Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl
command:
- sudo systemctl start suricata.service
You can examine the status of the service using the systemctl status
command:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - Suricata Intrusion Detection Service
Loaded: loaded (/usr/lib/systemd/system/suricata.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-10-21 18:22:56 UTC; 1min 57s ago
Docs: man:suricata(1)
Process: 24588 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
Main PID: 24590 (Suricata-Main)
Tasks: 1 (limit: 23473)
Memory: 80.2M
CGroup: /system.slice/suricata.service
└─24590 /sbin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -i eth0 --user suricata
Oct 21 18:22:56 suricata systemd[1]: Starting Suricata Intrusion Detection Service..
Oct 21 18:22:56 suricata systemd[1]: Started Suricata Intrusion Detection Service.
. . .
As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail
command to watch for a specific message in Suricata’s logs that indicates it has finished starting:
- sudo tail -f /var/log/suricata/suricata.log
You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:
Output19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.
This line indicates Suricata is running and ready to inspect traffic. You can exit the tail
command using CTRL+C
.
Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.
The ET Open ruleset that you downloaded contains over 30000 rules. A full explanation of how Suricata rules work and how to construct them is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.
For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498
using the curl
command.
Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:
- curl http://testmynids.org/uid/index.html
The curl
command will output a response like the following:
Outputuid=0(root) gid=0(root) groups=0(root)
This example response data is designed to trigger an alert, by pretending to return the output of a command like id
that might run on a compromised remote system via a web shell.
Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is in /var/log/suricata/fast.log
and the second is a machine-readable log in /var/log/suricata/eve.json
.
/var/log/suricata/fast.log
To check for a log entry in /var/log/suricata/fast.log
that corresponds to your curl
request use the grep
command. Using the 2100498
rule identifier from the Quickstart documentation, search for entries that match it using the following command:
- grep 2100498 /var/log/suricata/fast.log
If your request used IPv6, then you should receive output like the following, where 2001:DB8::1
is your system’s public IPv6 address:
Output10/21/2021-18:35:54.950106 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628
If your request used IPv4, then your log should have a message like this, where 203.0.113.1
is your system’s public IPv4 address:
Output10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364
Note the highlighted 2100498
value in the output, which is the Signature ID (sid
) that Suricata uses to identify a rule.
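To illustrate how the sid is embedded in a fast.log entry, the following sketch pulls it out of a fabricated copy of the log line above using only sed; the bracketed [1:2100498:7] triple is [gid:sid:rev], and in real use you would read /var/log/suricata/fast.log instead:

```shell
#!/bin/sh
# Fabricated fast.log line for illustration only.
line='10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364'
# Capture the middle field of the [digits:digits:digits] triple — the sid.
sid=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9][0-9]*\):\([0-9][0-9]*\):\([0-9][0-9]*\)\].*/\2/p')
echo "signature id: $sid"
```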
/var/log/suricata/eve.json
Suricata also logs events to /var/log/suricata/eve.json
(nicknamed the EVE log) using JSON to format entries.
The Suricata documentation recommends using the jq
utility to read and filter the entries in this file. Install jq
if you do not have it on your system using the following dnf
command:
- sudo dnf install jq
Once you have jq
installed, you can filter the events in the EVE log by searching for the 2100498
signature with the following command:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
The command examines each JSON entry and prints any that have an alert
object, with a signature_id
key that matches the 2100498
value that you are searching for. The output will resemble the following:
Output{
"timestamp": "2021-10-21T19:42:47.368856+0000",
"flow_id": 775889108832281,
"in_iface": "eth0",
"event_type": "alert",
"src_ip": "203.0.113.1",
"src_port": 80,
"dest_ip": "147.182.148.159",
"dest_port": 38920,
"proto": "TCP",
"community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=",
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
. . .
}
Note the highlighted "signature_id": 2100498,
line, which is the key that jq
is searching for. Also note the highlighted "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=",
line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.
Each network flow receives a unique Community Flow Identifier. Other network monitoring system (NMS) tools can compute the same identifier, which enables cross-referencing a Suricata alert with output from other tools.
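As a small illustration of that cross-referencing, the sketch below groups a fabricated two-event EVE sample by community_id using sed and awk (so it runs even without jq); events that share the identifier belong to the same flow:

```shell
#!/bin/sh
# Fabricated sample: an alert and its flow record share one community_id.
cat > /tmp/eve-demo.json <<'EOF'
{"event_type":"alert","community_id":"1:vuSfAFyy7oUq0LQC5+KNTBSuPxg="}
{"event_type":"flow","community_id":"1:vuSfAFyy7oUq0LQC5+KNTBSuPxg="}
EOF
# Extract each community_id, count occurrences, keep ids seen more than once.
shared=$(sed -n 's/.*"community_id":"\([^"]*\)".*/\1/p' /tmp/eve-demo.json \
         | sort | uniq -c | awk '$1 > 1 {print $2}')
echo "flow id seen in multiple events: $shared"
```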
A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Information Event Management (SIEM) system for further processing.
Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient, or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.
If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq
tool to extract specific fields from an alert, and then add UFW or IPtables rules to block requests.
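As a sketch of that workflow, the following hypothetical example collects the source IPs of alert events from a fabricated EVE sample and prints, rather than executes, the corresponding UFW deny commands. Review any such list by hand before applying it to a real firewall:

```shell
#!/bin/sh
# Fabricated EVE events for illustration; only the alert events matter here.
cat > /tmp/sample-eve.json <<'EOF'
{"event_type":"alert","src_ip":"203.0.113.50","alert":{"signature_id":2100498}}
{"event_type":"flow","src_ip":"203.0.113.60"}
{"event_type":"alert","src_ip":"203.0.113.50","alert":{"signature_id":2100498}}
EOF
if command -v jq >/dev/null 2>&1; then
  # Preferred path: let jq parse the JSON properly.
  ips=$(jq -r 'select(.event_type=="alert") | .src_ip' /tmp/sample-eve.json | sort -u)
else
  # Fallback for this demo only; real EVE entries should be parsed with jq.
  ips=$(grep '"event_type":"alert"' /tmp/sample-eve.json \
        | sed -n 's/.*"src_ip":"\([^"]*\)".*/\1/p' | sort -u)
fi
for ip in $ips; do
  echo "sudo ufw deny from $ip"   # print the rule; do not execute it
done
```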
Again, this is a hypothetical scenario using deliberately crafted request and response data. Determining which traffic is legitimate and which can be blocked depends on your knowledge of the systems and protocols that your environment should be able to access.
In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.
Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.
For more information about Suricata, visit the official Suricata Site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.
Now that you have Suricata installed and configured, you can continue to the next tutorial in this series Understanding Suricata Signatures where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.
The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection (IDS) and Intrusion Prevention (IPS) system. You also learned about Suricata rules and how to create your own.
In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and Ubuntu 20.04. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.
The components that you will use to build your own SIEM tool are:
- Elasticsearch, which stores and indexes the security event data.
- Kibana, the web interface for searching and visualizing the data that is stored in Elasticsearch.
- Filebeat, which reads Suricata’s eve.json
log file and sends each event to Elasticsearch for processing.
First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json
logs to Elasticsearch.
Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.
If you have been following this tutorial series then you should already have Suricata running on an Ubuntu 20.04 server. This server will be referred to as your Suricata server.
You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be an Ubuntu 20.04 server with at least 4GB RAM and 2 CPUs, and a non-root user with sudo privileges configured.
For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.
The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d
directory, where apt
will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Now update your server’s package index and install Elasticsearch and Kibana:
- sudo apt update
- sudo apt install elasticsearch kibana
Once you are done installing the packages, find and record your server’s private IP address using the ip address show
command:
- ip -brief address show
You will receive output like the following:
Outputlo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1 UP 10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64
The private network interface in this output is the highlighted eth1
device, with the IPv4 address 10.137.0.5/16
. Your device name, and IP addresses will be different. However, the address will be from the following reserved blocks of addresses:
- 10.0.0.0 to 10.255.255.255 (10/8 prefix)
- 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
- 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)
If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.
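If you want to check an address against those three blocks programmatically, a rough shell helper might look like the following. This is an illustration only: it matches the RFC 1918 ranges and nothing else (for example, it does not verify that the input is a well-formed IPv4 address):

```shell
#!/bin/sh
# Classify an IPv4 address as RFC 1918 private or not, using only the
# three reserved blocks listed above.
is_private_ipv4() {
  case "$1" in
    10.*)                                  return 0 ;;   # 10/8
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;   # 172.16/12
    192.168.*)                             return 0 ;;   # 192.168/16
    *)                                     return 1 ;;
  esac
}
# Demo: one private address, one public address from this tutorial's output.
for addr in 10.137.0.5 159.89.122.115; do
  if is_private_ipv4 "$addr"; then
    echo "$addr is private"
  else
    echo "$addr is public"
  fi
done
```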
Record the private IP address for your Elasticsearch server (in this case 10.137.0.5
). This address will be referred to as your_private_ip
in the remainder of this tutorial. Also note the name of the network interface, in this case eth1
. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.
Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack
security module.
Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface. You will also need to configure your firewall rules to allow access to Elasticsearch on that interface.
Open the /etc/elasticsearch/elasticsearch.yml
file using nano
or your preferred editor:
- sudo nano /etc/elasticsearch/elasticsearch.yml
Find the commented out #network.host: 192.168.0.1
line between lines 50–60 and add a new line after it that configures the network.bind_host
setting, as highlighted below:
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.bind_host: ["127.0.0.1", "your_private_ip"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
Substitute your private IP in place of the your_private_ip
address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.
Next, page down with the nano
shortcut CTRL+V
until you reach the end of the file.
Add the following highlighted lines to the end of the file:
. . .
discovery.type: single-node
xpack.security.enabled: true
The discovery.type
setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled
setting turns on some of the security features that are included with Elasticsearch.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using the Uncomplicated Firewall (ufw
), run the following commands:
- sudo ufw allow in on eth1
- sudo ufw allow out on eth1
Substitute your private network interface in place of eth1
if it uses a different name.
Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack
security module.
Now that you have configured networking and the xpack
security settings for Elasticsearch, you need to start it for the changes to take effect.
Run the following systemctl
command to start Elasticsearch:
- sudo systemctl start elasticsearch.service
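Elasticsearch can take a few moments to initialize. Once it is up, an unauthenticated request from the server itself should receive an HTTP 401 response (because xpack security is enabled) rather than a connection error. The following optional check is a sketch that assumes curl is installed; it reports status 000 if nothing is listening:

```shell
#!/bin/sh
# Probe the local Elasticsearch HTTP port and report the status code.
# 401 = up and requiring credentials; 000 = no response on port 9200.
if command -v curl >/dev/null 2>&1; then
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 http://127.0.0.1:9200 || true)
else
  code="000"   # curl unavailable; cannot probe
fi
echo "elasticsearch answered with HTTP status: $code"
```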
Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built into Elasticsearch.
Now that you have enabled the xpack.security.enabled
setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin
directory that can automatically generate random passwords for these users.
Run the following command to cd
to the directory and then generate random passwords for all the default users:
- cd /usr/share/elasticsearch/bin
- sudo ./elasticsearch-setup-passwords auto
You will receive output like the following. When prompted to continue, press y
and then RETURN
or ENTER
:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
Changed password for user kibana_system
PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
Changed password for user kibana
PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
Changed password for user logstash_system
PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
Changed password for user beats_system
PASSWORD beats_system = 2p81hIdAzWKknhzA992m
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
Changed password for user elastic
PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system
user’s password in the next section of this tutorial, and the elastic
user’s password in the Configuring Filebeat step of this tutorial.
At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack
security module.
In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.
First you’ll enable Kibana’s xpack
security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.
xpack.security
in KibanaTo get started with xpack
security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.
You can generate the required encryption keys using the kibana-encryption-keys
utility that is included in the /usr/share/kibana/bin
directory. Run the following to cd
to the directory and then generate the keys:
- cd /usr/share/kibana/bin/
- sudo ./kibana-encryption-keys generate -q
The -q
flag suppresses the tool’s instructions so that you only receive output like the following:
Outputxpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Copy these keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml
configuration file.
Open the file using nano
or your preferred editor:
- sudo nano /etc/kibana/kibana.yml
Page down with the nano
shortcut CTRL+V
until you reach the end of the file, then paste the three xpack
lines that you copied:
. . .
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
Keep the file open and proceed to the next section where you will configure Kibana’s network settings.
To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost"
line in /etc/kibana/kibana.yml
. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "your_private_ip"
Substitute your private IP in place of the your_private_ip
address.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.
There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml
configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.
We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.
If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username
and elasticsearch.password
.
If you choose to edit the configuration file, skip the rest of the steps in this section.
To add a secret to the keystore using the kibana-keystore
utility, first cd
to the /usr/share/kibana/bin
directory. Next, run the following command to set the username for Kibana:
- sudo ./kibana-keystore add elasticsearch.username
You will receive a prompt like the following:
Enter value for elasticsearch.username: *************
Enter kibana_system
when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an *
asterisk character. Press ENTER
or RETURN
when you are done entering the username.
Now repeat the same command for the password. Be sure to copy the password for the kibana_system
user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl
.
Run the following command to set the password:
- sudo ./kibana-keystore add elasticsearch.password
When prompted, paste the password to avoid any transcription errors:
Enter value for elasticsearch.password: ********************
Now that you have configured networking and the xpack
security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.
Run the following systemctl
command to restart Kibana:
- sudo systemctl start kibana.service
Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.
Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.
To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Next, add the Elastic source list to the sources.list.d
directory, where apt
will search for new sources:
- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Now update the server’s package index and install the Filebeat package:
- sudo apt update
- sudo apt install filebeat
Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml
configuration file using nano
or your preferred editor:
- sudo nano /etc/filebeat/filebeat.yml
Find the Kibana
section of the file around line 100. Add a line after the commented out #host: "localhost:5601"
line that points to your Kibana instance’s private IP address and port:
. . .
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
host: "your_private_ip:5601"
. . .
This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.
Next, find the Elasticsearch Output
section of the file around line 130 and edit the hosts
, username
, and password
settings to match the values for your Elasticsearch server:
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["your_private_ip:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "6kNbsxQGYZ2EQJiqJpgl"
. . .
Substitute in your Elasticsearch server’s private IP address on the hosts
line in place of the your_private_ip
value. Uncomment the username
field and leave it set to the elastic
user. Change the password
field from changeme
to the password for the elastic
user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.
Save and close the file when you are done editing it. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Next, enable Filebeat’s built-in Suricata module with the following command:
- sudo filebeat modules enable suricata
Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
Run the filebeat setup
command. It may take a few minutes to load everything:
- sudo filebeat setup
Once the command finishes you should receive output like the following:
OutputOverwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possible to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines
If there are no errors, use the systemctl
command to start Filebeat. It will begin sending events from Suricata’s eve.json
log to Elasticsearch once it is running.
- sudo systemctl start filebeat.service
Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.
Kibana is the graphical component of the Elastic Stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.
SSH has an option -L
that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.
On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.
Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:
- ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N
The various arguments to SSH are:

- The -L flag forwards traffic on local port 5601 to the remote server.
- The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
- The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
- The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

If you would like to close the tunnel at any time, press CTRL+C.
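As an alternative to retyping the -L flags, the same tunnel can be declared in your SSH client configuration. This is a sketch written to a temporary file for illustration; the kibana-tunnel alias, the sammy user, the 203.0.113.5 public IP, and the 10.10.0.5 private IP are example values, and on your own machine the entry would go in ~/.ssh/config:

```shell
# Sketch: the tunnel declared as an SSH client config entry, written to a
# temp file for illustration. Host alias, user, and IPs are example values;
# on your machine this entry would live in ~/.ssh/config.
cat > /tmp/ssh-config-sample <<'EOF'
Host kibana-tunnel
    HostName 203.0.113.5
    User sammy
    LocalForward 5601 10.10.0.5:5601
EOF
cat /tmp/ssh-config-sample
```

With an entry like this in ~/.ssh/config, running ssh -N kibana-tunnel opens the same port forward as the longer command above.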
On Windows your terminal should resemble the following screenshot:
Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER
or RETURN
.
On macOS and Linux your terminal will be similar to the following screenshot:
Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:
If your browser cannot connect to Kibana you will receive a message like the following in your terminal:
Outputchannel 3: open failed: connect failed: No route to host
This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.
Log in to your Kibana server using elastic
for the Username, and the password that you copied earlier in this tutorial for the user.
Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.
In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata
. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:
Click the [Filebeat Suricata] Events Overview
result to visit the Kibana dashboard that shows an overview of all logged Suricata events:
To visit the Suricata Alerts dashboard, repeat the search or click the Alerts
link that is included in the Events dashboard. Your page should resemble the following screenshot:
If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.
Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:
You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.
In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack
security module that is included with each tool.
After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.
Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.
The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.
]]>In this tutorial you will learn how to configure Suricata’s built-in Intrusion Prevention System (IPS) mode on Rocky Linux 8. By default Suricata is configured to run as an Intrusion Detection System (IDS), which only generates alerts and logs suspicious traffic. When you enable IPS mode, Suricata can actively drop suspicious network traffic in addition to generating alerts for further analysis.
Before enabling IPS mode, it is important to check which signatures you have enabled, and their default actions. An incorrectly configured signature, or a signature that is overly broad, may result in dropping legitimate traffic to your network, or even blocking you from accessing your servers over SSH and other management protocols.
In the first part of this tutorial you will check the signatures that you have installed and enabled. You will also learn how to include your own signatures. Once you know which signatures you would like to use in IPS mode, you’ll convert their default action to drop or reject traffic. With your signatures in place, you’ll learn how to send network traffic through Suricata using the netfilter NFQUEUE iptables target, and then generate some invalid network traffic to ensure that Suricata drops it as expected.
If you have been following this tutorial series then you should already have Suricata running on a Rocky Linux 8 server.
If you still need to install Suricata then you can follow How To Install Suricata on Rocky Linux 8
You should also have the ET Open Ruleset downloaded using the suricata-update
command, and included in your Suricata signatures.
The jq
command line JSON processing tool. If you do not have it installed from a previous tutorial, you can do so using the dnf
command:
- sudo dnf install jq
You may also have custom signatures that you would like to use from the previous Understanding Suricata Signatures tutorial.
The previous tutorials in this series explored how to install and configure Suricata, as well as how to understand signatures. If you would like to create and include your own rules then you need to edit Suricata’s /etc/suricata/suricata.yaml
file to include a custom path to your signatures.
First, let’s find your server’s public IPs so that you can use them in your custom signatures. To find your IPs you can use the ip
command:
- ip -brief address show
You should receive output like the following:
Outputlo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 203.0.113.5/20 10.20.0.5/16 2001:DB8::1/32 fe80::94ad:d4ff:fef9:cee0/64
eth1 UP 10.137.0.2/16 fe80::44a2:ebff:fe91:5187/64
Your public IP address(es) will be similar to the highlighted 203.0.113.5
and 2001:DB8::1/32
IPs in the output.
Now let’s create the following custom signature to scan for SSH traffic to non-SSH ports and include it in a file called /var/lib/suricata/rules/local.rules
. Open the file with vi
or your preferred editor:
- sudo vi /var/lib/suricata/rules/local.rules
Copy and paste the following signature:
alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)
alert ssh any any -> 2001:DB8::1/32 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000001;)
Substitute your server’s public IP address in place of the 203.0.113.5
and 2001:DB8::1/32
addresses in the rule. If you are not using IPv6 then you can skip adding that signature in this and the following rules.
You can continue adding custom signatures to this local.rules
file depending on your network and applications. For example, if you wanted to alert about HTTP traffic to non-standard ports, you could use the following signatures:
alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000002;)
alert http any any -> 2001:DB8::1/32 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000003;)
To add a signature that checks for TLS traffic to ports other than the default 443
for web servers, add the following:
alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000004;)
alert tls any any -> 2001:DB8::1/32 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000005;)
When you are done adding signatures, save and close the file. If you are using vi
, press ESC
and then :x
then ENTER
to save and exit.
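Every signature needs a unique sid, and a duplicate sid will cause Suricata’s configuration test to fail. The following sketch shows one way to spot duplicates with grep and uniq; it runs against a hypothetical sample file that deliberately reuses sid:1000002, but you could run the same pipeline against /var/lib/suricata/rules/local.rules:

```shell
# Sketch: detect duplicate sid values in a rules file before loading it.
# This sample file deliberately reuses sid:1000002 to demonstrate the
# output; on your server, run the pipeline against
# /var/lib/suricata/rules/local.rules.
cat > /tmp/local-sample.rules <<'EOF'
alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; sid:1000000;)
alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; sid:1000002;)
alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; sid:1000002;)
EOF
# uniq -d prints only values that appear more than once (here: sid:1000002)
grep -o 'sid:[0-9]*' /tmp/local-sample.rules | sort | uniq -d
```

An empty result means every sid in the file is unique.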
Now that you have some custom signatures defined, edit Suricata’s /etc/suricata/suricata.yaml
configuration file using nano
or your preferred editor to include them:
- sudo vi /etc/suricata/suricata.yaml
Find the rule-files:
portion of the configuration. If you are using vi
enter 1879gg
to go to the line. The exact location in your file may be different, but you should be in the correct general region of the file.
Edit the section and add the following highlighted - local.rules
line:
. . .
rule-files:
- suricata.rules
- local.rules
. . .
Save and exit the file. Be sure to validate Suricata’s configuration after adding your rules. To do so run the following command:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on how many rules you have loaded in the default suricata.rules
file. If you find the test takes too long, you can comment out the - suricata.rules
line in the configuration by adding a #
to the beginning of the line and then run your configuration test again. Be sure to remove the #
comment if you plan to use the suricata.rules
signature in your final running configuration.
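If you prefer not to comment out the entry by hand, a sed substitution can toggle it for you. This sketch operates on a hypothetical sample file rather than the real /etc/suricata/suricata.yaml (keep a backup before editing the real file):

```shell
# Sketch: comment out the suricata.rules entry with sed instead of editing
# by hand. Runs against a hypothetical sample file; on your server the
# target would be /etc/suricata/suricata.yaml (back it up first).
cat > /tmp/suricata-sample.yaml <<'EOF'
rule-files:
  - suricata.rules
  - local.rules
EOF
sed -i 's/^\([[:space:]]*\)- suricata.rules/\1#- suricata.rules/' /tmp/suricata-sample.yaml
cat /tmp/suricata-sample.yaml
```

Running the same sed with the two patterns swapped removes the # again when you are ready to load the full ruleset.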
Once you are satisfied with the signatures that you have created or included using the suricata-update
tool, you can proceed to the next step, where you’ll switch the default action for your signatures from alert
or log
to actively dropping traffic.
Now that you have your custom signatures tested and working with Suricata, you can change the action to drop
or reject
. When Suricata is operating in IPS mode, these actions will actively block invalid traffic for any matching signature.
These two actions are described in the previous tutorial in this series, Understanding Suricata Signatures. The choice of which action to use is up to you. A drop
action will immediately discard a packet and any subsequent packets that belong to the network flow. A reject
action will send both the client and server a reset packet if the traffic is TCP-based, and an ICMP error packet for any other protocol.
Let’s use the custom rules from the previous section and convert them to use the drop
action, since the traffic that they match is likely to be a network scan, or some other invalid connection.
Open your /var/lib/suricata/rules/local.rules
file using nano
or your preferred editor and change the alert
action at the beginning of each line in the file to drop
:
- sudo vi /var/lib/suricata/rules/local.rules
drop ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000000;)
drop ssh any any -> 2001:DB8::1/32 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000001;)
. . .
Repeat the step above for any signatures in /var/lib/suricata/rules/suricata.rules
that you would like to convert to drop
or reject
mode.
Note: If you ran suricata-update
in the prerequisite tutorial, you may have more than 30,000 signatures included in your suricata.rules file
.
If you convert every signature to drop
or reject
you risk blocking legitimate access to your network or servers. Instead, leave the rules in suricata.rules
for the time being, and add your custom signatures to local.rules
. Suricata will continue to generate alerts for suspicious traffic that is described by the signatures in suricata.rules
while it is running in IPS mode.
After you have a few days or weeks of alerts collected, you can analyze them and choose the relevant signatures to convert to drop
or reject
based on their sid
.
Once you have all the signatures configured with the action that you would like them to take, the next step is to reconfigure and then restart Suricata in IPS mode.
Suricata runs in IDS mode by default, which means it will not actively block network traffic. To switch to IPS (nfqueue) mode, you’ll need to edit Suricata’s /etc/sysconfig/suricata
configuration file.
Open the file in nano
or your preferred editor:
- sudo vi /etc/sysconfig/suricata
Find the OPTIONS="-i eth0 --user suricata"
line and comment it out by adding a #
to the beginning of the line. Then add a new OPTIONS="-q 0 -vvv --user suricata"
line that tells Suricata to run in IPS mode.
Your file should have the following highlighted lines in it when you are done editing:
. . .
# OPTIONS="-i eth0 --user suricata"
OPTIONS="-q 0 -vvv --user suricata"
. . .
Save and close the file. Now you can restart Suricata using systemctl
:
- sudo systemctl restart suricata.service
Check Suricata’s status using systemctl
:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - Suricata Intrusion Detection Service
Loaded: loaded (/usr/lib/systemd/system/suricata.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2021-12-14 16:52:07 UTC; 6s ago
Docs: man:suricata(1)
Process: 44256 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
Main PID: 44258 (Suricata-Main)
Tasks: 10 (limit: 11188)
Memory: 52.8M
CGroup: /system.slice/suricata.service
└─44258 /sbin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -q 0 -vvv --user suricata
. . .
Dec 14 16:52:07 suricata suricata[44258]: 14/12/2021 -- 16:52:07 - <Notice> - all 4 packet processing threads, 4 management threads initialized, engine started.
Note the highlighted active (running)
line that indicates Suricata restarted successfully.
With this change you are now ready to send traffic to Suricata using Firewalld in the next step.
Now that you have configured Suricata to process traffic in IPS mode, the next step is to direct incoming packets to Suricata. If you followed the prerequisite tutorials for this series and are using a Rocky Linux 8 system, you should have Firewalld installed and enabled.
To add the required rules for Suricata to Firewalld, you will need to run the following commands:
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
These two rules ensure that SSH traffic on IPv4 interfaces will bypass Suricata so that you can connect to your server using SSH, even when Suricata is not running. Without these rules, an incorrect or overly broad signature could block your SSH access. Additionally, if Suricata is stopped, all traffic will be sent to the NFQUEUE
target and then dropped since Suricata is not running.
Add the same rules for IPv6 using the following commands:
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
Next, add FORWARD
rules to ensure that if your server is acting as a gateway for other systems, all that traffic will also go to Suricata for processing.
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -j NFQUEUE
The final two INPUT
and OUTPUT
rules send all remaining traffic that is not SSH traffic to Suricata for processing.
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -j NFQUEUE
Repeat the commands for IPv6 traffic:
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 1 -j NFQUEUE
Now reload Firewalld to make the rules persistent:
- sudo firewall-cmd --reload
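The ten firewall-cmd rules above follow one repeating pattern per address family, so a short loop can generate them for review before you run any of them. This sketch only writes the commands to a file; nothing is applied to the firewall:

```shell
# Sketch: generate the ten firewall-cmd direct rules (one pattern per
# address family) for review. This only writes the commands to a file;
# nothing is applied to the running firewall.
for fam in ipv4 ipv6; do
  cat <<EOF
sudo firewall-cmd --permanent --direct --add-rule $fam filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
sudo firewall-cmd --permanent --direct --add-rule $fam filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
sudo firewall-cmd --permanent --direct --add-rule $fam filter FORWARD 0 -j NFQUEUE
sudo firewall-cmd --permanent --direct --add-rule $fam filter INPUT 1 -j NFQUEUE
sudo firewall-cmd --permanent --direct --add-rule $fam filter OUTPUT 1 -j NFQUEUE
EOF
done > /tmp/suricata-fw-commands.sh
cat /tmp/suricata-fw-commands.sh
```

After reviewing the generated file, you could run it with sh, followed by the same firewall-cmd --reload shown above.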
Note: If you are using another firewall like iptables you will need to modify these rules to match the format your firewall expects.
At this point in the tutorial you have Suricata configured to run in IPS mode, and your network traffic is being sent to Suricata by default. You will be able to restart your server at any time and your Suricata and firewall rules will be persistent.
The last step in this tutorial is to verify Suricata is dropping traffic correctly.
Now that you have Suricata and your firewall configured to process network traffic, you can test whether Suricata will drop packets that match your custom and other included signatures.
Recall signature sid:2100498
from the previous tutorial, which is modified in this example to drop
matching packets:
drop ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
Find and edit the rule in your /var/lib/suricata/rules/suricata.rules
file to use the drop
action if you have the signature included there. Otherwise, add the rule to your /var/lib/suricata/rules/local.rules
file.
Send Suricata the SIGUSR2
signal to get it to reload its signatures:
- sudo kill -usr2 $(pidof suricata)
Now test the rule using curl
:
- curl --max-time 5 http://testmynids.org/uid/index.html
You should receive an error stating that the request timed out, which indicates Suricata blocked the HTTP response:
Outputcurl: (28) Operation timed out after 5000 milliseconds with 0 out of 39 bytes received
You can confirm that Suricata dropped the HTTP response using jq
to examine the eve.json
file:
- sudo jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
You should receive output like the following:
Output{
. . .
"community_id": "1:SbOgFh2T3DZvwsoyMH4xfxOoVas=",
"alert": {
"action": "blocked",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
"severity": 2,
"metadata": {
"created_at": [
"2010_09_23"
],
"updated_at": [
"2010_09_23"
]
}
},
"http": {
"hostname": "testmynids.org",
"url": "/uid/index.html",
"http_user_agent": "curl/7.61.1",
"http_content_type": "text/html",
"http_method": "GET",
"protocol": "HTTP/1.1",
"status": 200,
"length": 39
},
. . .
The highlighted "action": "blocked"
line confirms that the signature matched, and Suricata dropped or rejected the test HTTP request.
In this tutorial you configured Suricata to block suspicious network traffic using its built-in IPS mode on Rocky Linux 8. You also added custom signatures to examine and block SSH, HTTP, and TLS traffic on non-standard ports. To tie everything together, you also added firewall rules to direct traffic through Suricata for processing.
Now that you have Suricata installed and configured in IPS mode, and can write your own signatures that either alert on or drop suspicious traffic, you can continue monitoring your servers and networks, and refining your signatures.
Once you are satisfied with your Suricata signatures and configuration, you can continue with the last tutorial in this series, which will guide you through sending logs from Suricata to a Security Information and Event Management (SIEM) system built using the Elastic Stack.
]]>In this tutorial you will learn how to configure Suricata’s built-in Intrusion Prevention System (IPS) mode on Debian 11. By default Suricata is configured to run as an Intrusion Detection System (IDS), which only generates alerts and logs suspicious traffic. When you enable IPS mode, Suricata can actively drop suspicious network traffic in addition to generating alerts for further analysis.
Before enabling IPS mode, it is important to check which signatures you have enabled, and their default actions. An incorrectly configured signature, or a signature that is overly broad, may result in dropping legitimate traffic to your network, or even blocking you from accessing your servers over SSH and other management protocols.
In the first part of this tutorial you will check the signatures that you have installed and enabled. You will also learn how to include your own signatures. Once you know which signatures you would like to use in IPS mode, you’ll convert their default action to drop or reject traffic. With your signatures in place, you’ll learn how to send network traffic through Suricata using the netfilter NFQUEUE iptables target, and then generate some invalid network traffic to ensure that Suricata drops it as expected.
If you have been following this tutorial series then you should already have Suricata running on a server. If you still need to install Suricata then you can follow one of these tutorials depending on your server’s operating system:
You should also have the ET Open Ruleset downloaded using the suricata-update
command, and included in your Suricata signatures.
The jq
command line JSON processing tool. If you do not have it installed from a previous tutorial, you can do so using the apt
command:
- sudo apt update
- sudo apt install jq
You may also have custom signatures that you would like to use from the previous Understanding Suricata Signatures tutorial.
The previous tutorials in this series explored how to install and configure Suricata, as well as how to understand signatures. If you would like to create and include your own signatures then you need to edit Suricata’s /etc/suricata/suricata.yaml
file to add them.
First, let’s find your server’s public IPs so that you can use them in your custom signatures. To find your IPs you can use the ip
command:
- ip -brief address show
You should receive output like the following:
Outputlo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 203.0.113.5/20 10.20.0.5/16 2604:a880:cad:d0::dc8:4001/64 fe80::94ad:d4ff:fef9:cee0/64
eth1 UP 10.137.0.2/16 fe80::44a2:ebff:fe91:5187/64
Your public IP address(es) will be similar to the highlighted 203.0.113.5
and 2604:a880:cad:d0::dc8:4001/64
IPs in the output.
Now let’s create the following custom signature to scan for SSH traffic to non-SSH ports and include it in a file called /etc/suricata/rules/local.rules
. Open the file with nano
or your preferred editor:
- sudo nano /etc/suricata/rules/local.rules
Copy and paste the following signature:
alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)
alert ssh any any -> 2604:a880:cad:d0::dc8:4001/64 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000001;)
Substitute in your server’s public IP address in place of the 203.0.113.5
and 2604:a880:cad:d0::dc8:4001/64
addresses in the rule. If you are not using IPv6 then you can skip adding that signature in this and the following rules.
You can continue adding custom signatures to this local.rules
file depending on your network and applications. For example, if you wanted to alert about HTTP traffic to non-standard ports, you could use the following signatures:
alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000002;)
alert http any any -> 2604:a880:cad:d0::dc8:4001/64 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000003;)
To add a signature that checks for TLS traffic to ports other than the default 443 for web servers, add the following:
alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000004;)
alert tls any any -> 2604:a880:cad:d0::dc8:4001/64 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000005;)
When you are done adding signatures, save and close the file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm. If you are using vi
, press ESC
and then :x
then ENTER
to save and exit.
Now that you have some custom signatures defined, edit Suricata’s /etc/suricata/suricata.yaml
configuration file using nano
or your preferred editor to include them:
- sudo nano /etc/suricata/suricata.yaml
Find the rule-files:
portion of the configuration. If you are using nano
use CTRL+_
and then enter the line number 1879
. If you are using vi
enter 1879gg
to go to the line.
Edit the section and add the following highlighted - local.rules
line:
. . .
rule-files:
- suricata.rules
- local.rules
. . .
Save and exit the file. Be sure to validate Suricata’s configuration after adding your rules. To do so run the following command:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on how many rules you have loaded in the default suricata.rules
file. If you find the test takes too long, you can comment out the - suricata.rules
line in the configuration by adding a #
to the beginning of the line and then run your configuration test again.
Once you are satisfied with the signatures that you have created or included using the suricata-update
tool, you can proceed to the next step, where you’ll switch the default action for your signatures from alert or log to actively dropping traffic.
Now that you have your custom signatures tested and working with Suricata, you can change the action to drop
or reject
. When Suricata is operating in IPS mode, these actions will actively block invalid traffic for any matching signature.
These two actions are described in the previous tutorial in this series, Understanding Suricata Signatures. The choice of which action to use is up to you. A drop
action will immediately discard a packet and any subsequent packets that belong to the network flow. A reject
action will send both the client and server a reset packet if the traffic is TCP-based, and an ICMP error packet for any other protocol.
Let’s use the custom rules from the previous section and convert them to use the drop
action, since the traffic that they match is likely to be a network scan, or some other invalid connection.
Open your /etc/suricata/rules/local.rules
file using nano
or your preferred editor and change the alert
action at the beginning of each line in the file to drop
:
drop ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000000;)
drop ssh any any -> 2604:a880:cad:d0::dc8:4001/64 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000001;)
. . .
Repeat the step above for any signatures in /etc/suricata/rules/suricata.rules
that you would like to convert to drop
or reject
mode.
Note: If you ran suricata-update
in the prerequisite tutorial, you may have more than 30,000 signatures included in your suricata.rules file
.
If you convert every signature to drop
or reject
you risk blocking legitimate access to your network or servers. Instead, leave the rules in suricata.rules
for the time being, and add your custom signatures to local.rules
. Suricata will continue to generate alerts for suspicious traffic that is described by the signatures in suricata.rules
while it is running in IPS mode.
After you have a few days or weeks of alerts collected, you can analyze them and choose the relevant signatures to convert to drop
or reject
based on their sid
.
Once you have all the signatures configured with the action that you would like them to take, the next step is to reconfigure and then restart Suricata in IPS mode.
Suricata runs in IDS mode by default, which means it will not actively block network traffic. To switch to IPS (nfqueue) mode, you’ll need to modify Suricata’s default settings.
Use the systemctl edit
command to create a new systemd override file:
- sudo systemctl edit suricata.service
Add the following highlighted lines at the start of the file, in between the comments:
### Editing /etc/systemd/system/suricata.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file
[Service]
ExecStart=
ExecStart=/usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid -q 0 -vvv
Type=simple
### Lines below this comment will be discarded
. . .
- The empty ExecStart=
line clears the default systemd command that starts a service. The next line defines the new ExecStart
command to use.
- The Type=simple
line ensures that systemd can manage the Suricata process when it is running in IPS mode.

Save and close the file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm. If you are using vi
, press ESC
and then :x
then ENTER
to save and exit.
Now reload systemd so that it detects the new Suricata settings:
- sudo systemctl daemon-reload
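Before restarting, you can confirm that the override actually carries the nfqueue flag. This sketch recreates the override contents in a temporary directory for illustration; on your server you would grep /etc/systemd/system/suricata.service.d/override.conf instead, or run systemctl cat suricata.service:

```shell
# Sketch: verify the override contains the nfqueue (-q 0) flag before
# restarting. The override contents are recreated under /tmp for
# illustration; on a server, grep the real override.conf or run
# `systemctl cat suricata.service`.
mkdir -p /tmp/suricata.service.d
cat > /tmp/suricata.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid -q 0 -vvv
Type=simple
EOF
grep -- '-q 0' /tmp/suricata.service.d/override.conf
```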
Now you can restart Suricata using systemctl
:
- sudo systemctl restart suricata.service
Check Suricata’s status using systemctl
:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - Suricata IDS/IDP daemon
Loaded: loaded (/lib/systemd/system/suricata.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/suricata.service.d
└─override.conf
Active: active (running) since Wed 2021-12-15 14:35:21 UTC; 38s ago
Docs: man:suricata(8)
man:suricatasc(8)
https://suricata-ids.org/docs/
Main PID: 29890 (Suricata-Main)
Tasks: 10 (limit: 2340)
Memory: 54.9M
CPU: 3.957s
CGroup: /system.slice/suricata.service
└─29890 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid -q 0 -vvv
. . .
Dec 15 14:35:21 suricata suricata[29890]: 15/12/2021 -- 14:35:21 - <Notice> - all 4 packet processing threads, 4 management threads initialized, engine started
Note the highlighted active (running)
line that indicates Suricata restarted successfully.
With this change you are now ready to send traffic to Suricata using the UFW firewall in the next step.
Now that you have configured Suricata to process traffic in IPS mode, the next step is to direct incoming packets to Suricata. If you followed the prerequisite tutorials for this series and are using an Ubuntu 20.04 system, you should have the Uncomplicated Firewall (UFW) installed and enabled by default.
To add the required rules for Suricata to UFW, you will need to edit the /etc/ufw/before.rules and /etc/ufw/before6.rules firewall files directly.
Open the first file using nano
or your preferred editor:
- sudo nano /etc/ufw/before.rules
Near the beginning of the file, insert the following highlighted lines:
. . .
# Don't delete these required lines, otherwise there will be errors
*filter
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-not-local - [0:0]
# End required lines
## Start Suricata NFQUEUE rules
-I INPUT 1 -p tcp --dport 22 -j NFQUEUE --queue-bypass
-I OUTPUT 1 -p tcp --sport 22 -j NFQUEUE --queue-bypass
-I FORWARD -j NFQUEUE
-I INPUT 2 -j NFQUEUE
-I OUTPUT 2 -j NFQUEUE
## End Suricata NFQUEUE rules
# allow all on loopback
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-output -o lo -j ACCEPT
. . .
Save and exit the file when you are done editing it. Now add the same lines to the same section in the /etc/ufw/before6.rules
file.
The first two INPUT
and OUTPUT
rules are used to bypass Suricata so that you can connect to your server using SSH, even when Suricata is not running. Without these rules, an incorrect or overly broad signature could block your SSH access. Additionally, if Suricata is stopped, all traffic will be sent to the NFQUEUE
target and then dropped since Suricata is not running.
The next FORWARD
rule ensures that if your server is acting as a gateway for other systems, all that traffic will also go to Suricata for processing.
The final two INPUT
and OUTPUT
rules send all remaining traffic that is not SSH traffic to Suricata for processing.
Restart UFW to load the new rules:
- sudo systemctl restart ufw.service
Note: If you are using another firewall you will need to modify these rules to match the format your firewall expects.
If you are using iptables, then you can insert these rules directly using the iptables
and ip6tables
commands. However, you will need to ensure that the rules are persistent across reboots with a tool like iptables-persistent
.
If you are using firewalld
, then the following rules will direct traffic to Suricata:
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 1 -j NFQUEUE
- sudo firewall-cmd --reload
At this point in the tutorial you have Suricata configured to run in IPS mode, and your network traffic is being sent to Suricata by default. You will be able to restart your server at any time and your Suricata and firewall rules will be persistent.
The last step in this tutorial is to verify Suricata is dropping traffic correctly.
Now that you have Suricata and your firewall configured to process network traffic, you can test whether Suricata will drop packets that match your custom and other included signatures.
Recall signature sid:2100498
from the previous tutorial, which is modified in this example to drop
matching packets:
drop ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
Find and edit the rule in your /etc/suricata/rules/suricata.rules
file to use the drop
action if you have the signature included there. Otherwise, add the rule to your /etc/suricata/rules/local.rules
file.
Send Suricata the SIGUSR2
signal to get it to reload its signatures:
- sudo kill -usr2 $(pidof suricata)
Now test the rule using curl
:
- curl --max-time 5 http://testmynids.org/uid/index.html
You should receive an error stating that the request timed out, which indicates Suricata blocked the HTTP response:
Outputcurl: (28) Operation timed out after 5000 milliseconds with 0 out of 39 bytes received
You can confirm that Suricata dropped the HTTP response using jq
to examine the eve.log
file:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
You should receive output like the following:
Output{
. . .
"community_id": "1:Z+RcUB32putNzIZ38V/kEzZbWmQ=",
"alert": {
"action": "blocked",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
"severity": 2,
"metadata": {
"created_at": [
"2010_09_23"
],
"updated_at": [
"2010_09_23"
]
}
},
"http": {
"hostname": "testmynids.org",
"url": "/uid/index.html",
"http_user_agent": "curl/7.68.0",
"http_content_type": "text/html",
"http_method": "GET",
"protocol": "HTTP/1.1",
"status": 200,
"length": 39
},
. . .
The highlighted "action": "blocked"
line confirms that the signature matched, and Suricata dropped or rejected the test HTTP request.
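If you want a quick tally of blocked events without jq, grep can approximate the same check. This sketch runs against a two-line sample file rather than the real /var/log/suricata/eve.json, and assumes compact one-object-per-line JSON; the spacing in your actual log may differ, so adjust the pattern as needed:

```shell
# Two sample eve.json-style events (placeholders for real log lines).
cat > /tmp/eve-sample.json <<'EOF'
{"event_type":"alert","alert":{"action":"blocked","signature_id":2100498}}
{"event_type":"alert","alert":{"action":"allowed","signature_id":2013028}}
EOF

# Count events where the matched signature actually blocked traffic.
grep -c '"action":"blocked"' /tmp/eve-sample.json
```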
In this tutorial you configured Suricata to block suspicious network traffic using its built-in IPS mode. You also added custom signatures to examine and block SSH, HTTP, and TLS traffic on non-standard ports. To tie everything together, you also added firewall rules that direct traffic through Suricata for processing.
Now that you have Suricata installed and configured in IPS mode, and can write your own signatures that either alert on or drop suspicious traffic, you can continue monitoring your servers and networks, and refining your signatures.
Once you are satisfied with your Suricata signatures and configuration, you can continue with the last tutorial in this series, which will guide you through sending logs from Suricata to a Security Information and Event Management (SIEM) system built using the Elastic Stack.
]]>In this tutorial you will learn how to configure Suricata’s built-in Intrusion Prevention System (IPS) mode on Ubuntu 20.04. By default Suricata is configured to run as an Intrusion Detection System (IDS), which only generates alerts and logs suspicious traffic. When you enable IPS mode, Suricata can actively drop suspicious network traffic in addition to generating alerts for further analysis.
Before enabling IPS mode, it is important to check which signatures you have enabled, and their default actions. An incorrectly configured or overly broad signature may result in dropping legitimate traffic to your network, or even block you from accessing your servers over SSH and other management protocols.
In the first part of this tutorial you will check the signatures that you have installed and enabled. You will also learn how to include your own signatures. Once you know which signatures you would like to use in IPS mode, you’ll convert their default action to drop or reject traffic. With your signatures in place, you’ll learn how to send network traffic through Suricata using the netfilter NFQUEUE iptables target, and then generate some invalid network traffic to ensure that Suricata drops it as expected.
If you have been following this tutorial series then you should already have Suricata running on an Ubuntu 20.04 server.
If you still need to install Suricata then you can follow How To Install Suricata on Ubuntu 20.04
You should also have the ET Open Ruleset downloaded using the suricata-update
command, and included in your Suricata signatures.
The jq
command line JSON processing tool. If you do not have it installed from a previous tutorial, you can do so using the apt
command:
- sudo apt update
- sudo apt install jq
You may also have custom signatures that you would like to use from the previous Understanding Suricata Signatures tutorial.
The previous tutorials in this series explored how to install and configure Suricata, as well as how to understand signatures. If you would like to create and include your own rules then you need to edit Suricata’s /etc/suricata/suricata.yaml
file to include a custom path to your signatures.
First, let’s find your server’s public IPs so that you can use them in your custom signatures. To find your IPs you can use the ip
command:
- ip -brief address show
You should receive output like the following:
Outputlo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 203.0.113.5/20 10.20.0.5/16 2001:DB8::1/32 fe80::94ad:d4ff:fef9:cee0/64
eth1 UP 10.137.0.2/16 fe80::44a2:ebff:fe91:5187/64
Your public IP address(es) will be similar to the highlighted 203.0.113.5
and 2001:DB8::1/32
IPs in the output.
Now let’s create the following custom signature to scan for SSH traffic to non-SSH ports and include it in a file called /var/lib/suricata/rules/local.rules
. Open the file with nano
or your preferred editor:
- sudo nano /var/lib/suricata/rules/local.rules
Copy and paste the following signature:
alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)
alert ssh any any -> 2001:DB8::1/32 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000001;)
Substitute your server’s public IP address in place of the 203.0.113.5
and 2001:DB8::1/32
addresses in the rule. If you are not using IPv6 then you can skip adding that signature in this and the following rules.
You can continue adding custom signatures to this local.rules
file depending on your network and applications. For example, if you wanted to alert about HTTP traffic to non-standard ports, you could use the following signatures:
alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000002;)
alert http any any -> 2001:DB8::1/32 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000003;)
To add a signature that checks for TLS traffic to ports other than the default 443
for web servers, add the following:
alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000004;)
alert tls any any -> 2001:DB8::1/32 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000005;)
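Because every signature must have a unique sid, it is worth checking for accidental duplicates before you load your custom rules. Here is a small sketch using grep and uniq against a sample file (the rules shown are placeholders):

```shell
# Sample rules file containing an accidental duplicate sid.
cat > /tmp/check.rules <<'EOF'
alert ssh any any -> any any (msg:"a"; sid:1000000;)
alert http any any -> any any (msg:"b"; sid:1000002;)
alert tls any any -> any any (msg:"c"; sid:1000002;)
EOF

# Print any sid that appears more than once; no output means all sids are unique.
grep -o 'sid:[0-9]*' /tmp/check.rules | sort | uniq -d
```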
When you are done adding signatures, save and close the file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm. If you are using vi
, press ESC
and then :x
then ENTER
to save and exit.
Now that you have some custom signatures defined, edit Suricata’s /etc/suricata/suricata.yaml
configuration file using nano
or your preferred editor to include them:
- sudo nano /etc/suricata/suricata.yaml
Find the rule-files:
portion of the configuration. If you are using nano
use CTRL+_
and then enter the line number 1879
. If you are using vi
enter 1879gg
to go to the line. The exact location in your file may be different, but you should be in the correct general region of the file.
Edit the section and add the following highlighted - local.rules
line:
. . .
rule-files:
- suricata.rules
- local.rules
. . .
Save and exit the file. Be sure to validate Suricata’s configuration after adding your rules. To do so run the following command:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on how many rules you have loaded in the default suricata.rules
file. If you find the test takes too long, you can comment out the - suricata.rules
line in the configuration by adding a #
to the beginning of the line and then run your configuration test again. Be sure to remove the #
comment if you plan to use the suricata.rules
signature in your final running configuration.
Once you are satisfied with the signatures that you have created or included using the suricata-update
tool, you can proceed to the next step, where you’ll switch the default action for your signatures from alert
or log
to actively dropping traffic.
Now that you have your custom signatures tested and working with Suricata, you can change the action to drop
or reject
. When Suricata is operating in IPS mode, these actions will actively block invalid traffic for any matching signature.
These two actions are described in the previous tutorial in this series, Understanding Suricata Signatures. The choice of which action to use is up to you. A drop
action will immediately discard a packet and any subsequent packets that belong to the network flow. A reject
action will send both the client and server a reset packet if the traffic is TCP-based, and an ICMP error packet for any other protocol.
Let’s use the custom rules from the previous section and convert them to use the drop
action, since the traffic that they match is likely to be a network scan, or some other invalid connection.
Open your /var/lib/suricata/rules/local.rules
file using nano
or your preferred editor and change the alert
action at the beginning of each line in the file to drop
:
- sudo nano /var/lib/suricata/rules/local.rules
drop ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000000;)
drop ssh any any -> 2001:DB8::1/32 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000001;)
. . .
Repeat the step above for any signatures in /var/lib/suricata/rules/suricata.rules
that you would like to convert to drop
or reject
mode.
Note: If you ran suricata-update
in the prerequisite tutorial, you may have more than 30,000 signatures included in your suricata.rules file
.
If you convert every signature to drop
or reject
you risk blocking legitimate access to your network or servers. Instead, leave the rules in suricata.rules
for the time being, and add your custom signatures to local.rules
. Suricata will continue to generate alerts for suspicious traffic that is described by the signatures in suricata.rules
while it is running in IPS mode.
After you have a few days or weeks of alerts collected, you can analyze them and choose the relevant signatures to convert to drop
or reject
based on their sid
.
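When you review what you have changed, a quick tally of actions can help you see how many rules will actively block traffic. The following sketch counts the action keyword at the start of each rule in a sample file (the rules are placeholders):

```shell
# Sample rules file with mixed actions.
cat > /tmp/actions.rules <<'EOF'
alert dns any any -> any any (msg:"a"; sid:1;)
drop ssh any any -> any any (msg:"b"; sid:2;)
alert tls any any -> any any (msg:"c"; sid:3;)
EOF

# Tally how many rules use each action keyword.
awk '{print $1}' /tmp/actions.rules | sort | uniq -c | sort -rn
```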
Once you have all the signatures configured with the action that you would like them to take, the next step is to reconfigure and then restart Suricata in IPS mode.
nfqueue Mode

Suricata runs in IDS mode by default, which means it will not actively block network traffic. To switch to IPS mode, you’ll need to edit Suricata’s /etc/default/suricata configuration file.
Open the file in nano
or your preferred editor:
- sudo nano /etc/default/suricata
Find the LISTENMODE=af-packet line and comment it out by adding a # to the beginning of the line. Then add a new LISTENMODE=nfqueue line that tells Suricata to run in IPS mode.
Your file should have the following highlighted lines in it when you are done editing:
. . .
# LISTENMODE=af-packet
LISTENMODE=nfqueue
. . .
Save and close the file. Now you can restart Suricata using systemctl
:
- sudo systemctl restart suricata.service
Check Suricata’s status using systemctl
:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - LSB: Next Generation IDS/IPS
Loaded: loaded (/etc/init.d/suricata; generated)
Active: active (running) since Wed 2021-12-01 15:54:28 UTC; 2s ago
Docs: man:systemd-sysv-generator(8)
Process: 1452 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
Tasks: 12 (limit: 9513)
Memory: 63.6M
CGroup: /system.slice/suricata.service
└─1472 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -q 0 -D -vvv
Dec 01 15:54:28 suricata systemd[1]: Starting LSB: Next Generation IDS/IPS...
Dec 01 15:54:28 suricata suricata[1452]: Starting suricata in IPS (nfqueue) mode... done.
Dec 01 15:54:28 suricata systemd[1]: Started LSB: Next Generation IDS/IPS.
Note the highlighted active (running)
line that indicates Suricata restarted successfully. Also note the Starting suricata in IPS (nfqueue) mode... done.
line, which confirms Suricata is now running in IPS mode.
With this change you are now ready to send traffic to Suricata using the UFW firewall in the next step.
Now that you have configured Suricata to process traffic in IPS mode, the next step is to direct incoming packets to Suricata. If you followed the prerequisite tutorials for this series and are using an Ubuntu 20.04 system, you should have the Uncomplicated Firewall (UFW) installed and enabled.
To add the required rules for Suricata to UFW, you will need to edit the /etc/ufw/before.rules (IPv4) and /etc/ufw/before6.rules (IPv6) firewall files directly.
Open the first file for IPv4 rules using nano
or your preferred editor:
- sudo nano /etc/ufw/before.rules
Near the beginning of the file, insert the following highlighted lines:
. . .
# Don't delete these required lines, otherwise there will be errors
*filter
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-not-local - [0:0]
# End required lines
## Start Suricata NFQUEUE rules
-I INPUT 1 -p tcp --dport 22 -j NFQUEUE --queue-bypass
-I OUTPUT 1 -p tcp --sport 22 -j NFQUEUE --queue-bypass
-I FORWARD -j NFQUEUE
-I INPUT 2 -j NFQUEUE
-I OUTPUT 2 -j NFQUEUE
## End Suricata NFQUEUE rules
# allow all on loopback
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-output -o lo -j ACCEPT
. . .
Save and exit the file when you are done editing it. Now add the same highlighted lines to the same section in the /etc/ufw/before6.rules
file:
- sudo nano /etc/ufw/before6.rules
Ensure that both files contain the same Suricata NFQUEUE rules. Save and exit the file when you are done editing it.
The first two INPUT
and OUTPUT
rules are used to bypass Suricata so that you can connect to your server using SSH, even when Suricata is not running. Without these rules, an incorrect or overly broad signature could block your SSH access. Additionally, if Suricata is stopped, all traffic will be sent to the NFQUEUE
target and then dropped since Suricata is not running.
The next FORWARD
rule ensures that if your server is acting as a gateway for other systems, all that traffic will also go to Suricata for processing.
The final two INPUT
and OUTPUT
rules send all remaining traffic that is not SSH traffic to Suricata for processing.
Restart UFW to load the new rules:
- sudo systemctl restart ufw.service
Note: If you are using another firewall you will need to modify these rules to match the format your firewall expects.
If you are using iptables, then you can insert these rules directly using the iptables
and ip6tables
commands. However, you will need to ensure that the rules are persistent across reboots with a tool like iptables-persistent
.
If you are using firewalld
, then the following rules will direct traffic to Suricata:
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -j NFQUEUE
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
- sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 1 -j NFQUEUE
- sudo firewall-cmd --reload
At this point in the tutorial you have Suricata configured to run in IPS mode, and your network traffic is being sent to Suricata by default. You will be able to restart your server at any time and your Suricata and firewall rules will be persistent.
The last step in this tutorial is to verify Suricata is dropping traffic correctly.
Now that you have Suricata and your firewall configured to process network traffic, you can test whether Suricata will drop packets that match your custom and other included signatures.
Recall signature sid:2100498
from the previous tutorial, which is modified in this example to drop
matching packets:
drop ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
Find and edit the rule in your /var/lib/suricata/rules/suricata.rules
file to use the drop
action if you have the signature included there. Otherwise, add the rule to your /var/lib/suricata/rules/local.rules
file.
Send Suricata the SIGUSR2
signal to get it to reload its signatures:
- sudo kill -usr2 $(pidof suricata)
Now test the rule using curl
:
- curl --max-time 5 http://testmynids.org/uid/index.html
You should receive an error stating that the request timed out, which indicates Suricata blocked the HTTP response:
Outputcurl: (28) Operation timed out after 5000 milliseconds with 0 out of 39 bytes received
You can confirm that Suricata dropped the HTTP response using jq
to examine the eve.log
file:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
You should receive output like the following:
Output{
. . .
"community_id": "1:tw19kjR2LeWacglA094gRfEEuDU=",
"alert": {
"action": "blocked",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
"severity": 2,
"metadata": {
"created_at": [
"2010_09_23"
],
"updated_at": [
"2010_09_23"
]
}
},
"http": {
"hostname": "testmynids.org",
"url": "/uid/index.html",
"http_user_agent": "curl/7.68.0",
"http_content_type": "text/html",
"http_method": "GET",
"protocol": "HTTP/1.1",
"status": 200,
"length": 39
},
. . .
The highlighted "action": "blocked"
line confirms that the signature matched, and Suricata dropped or rejected the test HTTP request.
In this tutorial you configured Suricata to block suspicious network traffic using its built-in IPS mode. You also added custom signatures to examine and block SSH, HTTP, and TLS traffic on non-standard ports. To tie everything together, you also added firewall rules to direct traffic through Suricata for processing.
Now that you have Suricata installed and configured in IPS mode, and can write your own signatures that either alert on or drop suspicious traffic, you can continue monitoring your servers and networks, and refining your signatures.
Once you are satisfied with your Suricata signatures and configuration, you can continue with the last tutorial in this series, which will guide you through sending logs from Suricata to a Security Information and Event Management (SIEM) system built using the Elastic Stack.
]]>The first tutorial in this series explained how to install and configure Suricata. If you followed that tutorial, you also learned how to download and update Suricata rulesets, and how to examine logs for alerts about suspicious activity. However, the rules that you downloaded in that tutorial are numerous, and cover many different protocols, applications, and attack vectors that may not be relevant to your network and servers.
In this tutorial you’ll learn how Suricata signatures are structured, and some important options that are commonly used in most rules. Once you are familiar with how to understand the structure and fields in a signature, you’ll be able to write your own signatures that you can combine with a firewall to alert you about most suspicious traffic to your servers, without needing to use other external rulesets.
This approach to writing and managing rules means that you can use Suricata more efficiently, since it only needs to process the specific rules that you write. Once you have a ruleset that describes the majority of the legitimate and suspicious traffic that you expect to encounter in your network, you can start to selectively drop invalid traffic using Suricata in its active Intrusion Prevention (IPS) mode. The next tutorial in this series will explain how to enable Suricata’s IPS functionality.
For the purposes of this tutorial, you can run Suricata on any system, since signatures generally do not require any particular operating system. If you are following this tutorial series, then you should already have the ET Open Ruleset downloaded using the suricata-update command, and included in your Suricata signatures.

Suricata signatures can appear complex at first, but once you learn how they are structured, and how Suricata processes them, you’ll be able to create your own rules to suit your network’s requirements.
At a high level, Suricata signatures consist of three parts: an action, a header, and options. The options include the rule’s Signature ID (sid), log message, regular expressions that match the contents of packets, classification type, and other modifiers that can help identify legitimate and suspicious traffic.

The general structure of a signature is the following:
ACTION HEADER OPTIONS
The header and options portions of a signature have multiple sections. For example, in the previous tutorial, you tested Suricata using the rule with sid
2100498. Here is the complete rule for reference:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
The alert
portion of the signature is the action, the ip any any -> any any
section is the header, and the rest of the signature starting with (msg:GPL ATTACK_RESPONSE...
contains the rule’s options.
In the following sections you’ll examine each part of a Suricata rule in detail.
The first part of the sid:2100498 signature is the action, in this case alert. The action portion of a Suricata signature specifies the action to take when a packet matches the rule. Depending on whether Suricata is operating in IDS or IPS mode, an action can be alert, pass, drop, or reject.
Each Suricata signature has a header section that describes the network protocol, source and destination IP addresses, ports, and direction of traffic. Referring to the example sid:2100498
signature, the header section of the rule is the highlighted ip any any -> any any
portion:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
The general format of a rule’s header section is:
<PROTOCOL> <SOURCE IP> <SOURCE PORT> -> <DESTINATION IP> <DESTINATION PORT>
The Protocol can be a base protocol like tcp, udp, icmp, or ip, or an application-layer protocol like http, dns, tls, or ssh.
The Source and Destination fields can be IP addresses or network ranges, or the special value any
, which will match all IP addresses and networks. The ->
arrow indicates the direction of traffic.
Note: Signatures can also use a non-directional marker <>
that will match traffic in both directions. However, the Suricata documentation about directional markers notes that most rules will use the ->
right matching arrow.
If you wanted to alert on malicious outbound traffic (that is traffic leaving your network), then the Source field would be the IP address or network range of your system. The Destination could be a remote system’s IP or network, or the special any
value.
Conversely, if you wanted to generate an alert for malicious incoming traffic, the Source field could be set to any
, and the Destination to your system’s IP address or network range.
You can also specify the TCP or UDP port to examine using the Port fields. Generally, traffic originating from a system is assigned a random port, so the any
value is appropriate for the left side of the ->
indicator. The destination port can also be any
if you plan to examine the contents of every incoming packet, or you can limit a signature to only scan packets on individual ports, like 22 for SSH traffic, or 443 for HTTPS.
The ip any any -> any any
header from sid:2100498
is a generic header that will match all traffic, regardless of protocol, source or destination IPs, or ports. This kind of catch all header is useful when you want to ensure inbound and outbound traffic is checked for suspicious content.
Note that the Source, Destination, and Port fields can also use the special !
negation operator, which will process traffic that does not match the value of the field.
For example, the following signature would make Suricata alert on all incoming SSH packets from any
network that are destined for your network (represented by the 203.0.113.0/24
IP block), that are not destined for port 22:
alert ssh any any -> 203.0.113.0/24 !22 (sid:1000000;)
This alert would not be that useful, since it does not contain any message about the packet, or a classification type. To add extra information to an alert, as well as match on more specific criteria, Suricata rules have an Options section where you can specify a number of additional settings for a signature.
The arguments inside the parenthesis (. . .)
in a Suricata signature contain various options and keyword modifiers that you can use to match on specific parts of a packet, classify a rule, or log custom messages. Whereas a rule’s header arguments operate on packet headers at the IP, port, and protocol level, options match on the data contained inside a packet.
Options in a Suricata rule must be separated by a ;
semicolon, and generally use a key:value format. Some options do not have any settings and only the name needs to be specified in a rule.
Using the example signature from the previous section, you could add the msg
option with a value of SSH traffic detected on non-SSH port
explaining what the alert is about:
alert ssh any any -> 203.0.113.0/24 !22 (msg:"SSH TRAFFIC on non-SSH port"; sid:1000000;)
A full explanation of how you can use each option in a Suricata rule is beyond the scope of this tutorial. The Suricata rules documentation beginning in Section 6.2 describes each keyword option in detail.
However, there are some core options like the content
keyword and various Meta keywords that are used in most signatures, which we’ll examine in the following sections.
Content Keyword

One of the most important options for any rule is the content keyword. Recall the example sid:2100498 signature:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
The highlighted content:"uid=0|28|root|29|";
portion contains the content
keyword, and the value that Suricata will look for inside a packet. In the case of this example signature, all packets from any IP address on any port will be checked to ensure they do not contain the string value uid=0|28|root|29|
(which in the previous tutorial was used as an example indicating a compromised host).
The content
keyword can be used with most other keywords in Suricata. You can create very specific signatures using combinations of headers and options that target specific application protocols, and then check packet contents for individual bytes, strings, or matches using regular expressions.
For example, the following signature examines DNS traffic looking for any packet with the contents your_domain.com
and generates an alert:
alert dns any any -> any any (msg:"DNS LOOKUP for your_domain.com"; dns.query; content:"your_domain.com"; sid:1000001;)
However, this rule would not match if the DNS query used the domain YOUR_DOMAIN.COM
, since Suricata defaults to case-sensitive content matching. To make content matches insensitive to case, add the nocase;
keyword to the rule:
alert dns any any -> any any (msg:"DNS LOOKUP for your_domain.com"; dns.query; content:"your_domain.com"; nocase; sid:1000001;)
Now any combination of lower or uppercase letters will still match the content
keyword.
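Content matches are not limited to printable strings: as the uid=0|28|root|29| value shows, bytes can be written in hexadecimal between pipe characters (|28| and |29| are the ( and ) characters). The following is an illustrative sketch, with a hypothetical msg and sid, that matches the two CRLF pairs terminating an HTTP header block:

```
alert tcp any any -> any any (msg:"End of HTTP headers detected"; content:"|0d 0a 0d 0a|"; sid:1000099;)
```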
msg Keyword
The example signatures in this tutorial have all contained msg
keywords with information about a signature. While the msg
option is not required, leaving it blank makes it difficult to understand why an alert or drop action has occurred when examining Suricata’s logs.
A msg
option is designed to be a human-readable text description of an alert. It should be descriptive and add context to an alert so that you or someone else who is analyzing logs understands why the alert was triggered. In the reference Keyword section of this tutorial you will learn about the reference
option that you can use to link to more information about a signature and the issue it is designed to detect.
sid and rev Keywords
Every Suricata signature needs a unique Signature ID (sid
). If two rules have the same sid
(in the following example output it is sid:10000000
), Suricata will not start and will instead generate an error like the following:
Example Duplicate sid Error. . .
19/11/2021 -- 01:17:40 - <Error> - [ERRCODE: SC_ERR_DUPLICATE_SIG(176)] - Duplicate signature "drop ssh any any -> 127.0.0.0/8 !22 (msg:"blocked invalid ssh"; sid:10000000;)"
. . .
When you create your own signatures, the range 1000000-1999999 is reserved for custom rules. Suricata’s built-in rules are in the range from 2200000-2299999. Other sid
ranges are documented on the Emerging Threats SID Allocation page.
The sid
option is usually the last part of a Suricata rule. However, if there have been multiple versions of a signature with changes over time, there is a rev
option that is used to specify the version of a rule. For example, the SSH alert from earlier in this tutorial could be changed to only scan for SSH traffic on port 2022:
alert ssh any any -> 203.0.113.0/24 2022 (msg:"SSH TRAFFIC on non-SSH port"; sid:1000000; rev:2;)
The updated signature now includes the rev:2
option, indicating it has been updated from a previous version.
reference Keyword
The reference
keyword is used in signatures to describe where to find more information about the attack or issue that a rule is meant to detect. For example, if a signature is designed to detect a new kind of exploit or attack method, the reference field can be used to link to a security researcher or company’s website that documents the issue.
The Heartbleed vulnerability in OpenSSL is an example of a widely publicized and researched bug. Suricata comes with a signature that is designed to check for invalid TLS heartbeat packets, and it includes a reference to the main Heartbleed CVE entry:
alert tls any any -> any any (msg:"SURICATA TLS invalid heartbeat encountered, possible exploit attempt (heartbleed)"; flow:established; app-layer-event:tls.invalid_heartbeat_message; flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; reference:cve,2014-0160; sid:2230013; rev:1;)
Note the highlighted reference:cve,2014-0160;
portion of the signature. This reference option tells you or the analyst who is examining alerts from Suricata where to find more information about the particular issue.
The reference option can use any of the prefixes from the /etc/suricata/reference.config
file. For example, url
could be used in place of cve
in the preceding example, with a link directly to the Heartbleed site in place of the 2014-0160
CVE identifier.
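For example, a modified version of the signature might look like the following sketch, where heartbleed.com stands in as an assumed address for the Heartbleed site; only the reference option differs from the original, and the rev value is bumped to mark the change:

```
alert tls any any -> any any (msg:"SURICATA TLS invalid heartbeat encountered, possible exploit attempt (heartbleed)"; flow:established; app-layer-event:tls.invalid_heartbeat_message; flowint:tls.anomaly.count,+,1; classtype:protocol-command-decode; reference:url,heartbleed.com; sid:2230013; rev:2;)
```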
classtype Keyword
Suricata can classify traffic according to a preconfigured set of categories that are included when you install the Suricata package with your Linux distribution’s package manager. The default classification file is usually found in /etc/suricata/classification.config
and contains entries like the following:
#
# config classification:shortname,short description,priority
#
config classification: not-suspicious,Not Suspicious Traffic,3
config classification: unknown,Unknown Traffic,3
config classification: bad-unknown,Potentially Bad Traffic, 2
. . .
As indicated by the file header, each classification entry has three fields: a shortname that signatures reference with the classtype option (not-suspicious, unknown, and bad-unknown respectively in these entries), a short description (such as Not Suspicious Traffic), and a default priority (3, 3, and 2 respectively).
In the example sid:2100498
signature, the classtype is classtype:bad-unknown;
, which is highlighted in the following example:
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
The implicit priority for the signature is 2, since that is the value that is assigned to the bad-unknown
classtype in /etc/suricata/classification.config
. If you would like to override the default priority for a classtype, you can add a priority:n
option to a signature, where n
is a value from 1 to 255.
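For instance, a hypothetical local variant of the sid:2100498 signature (renumbered into the custom sid range) could force the highest priority in spite of the bad-unknown default of 2:

```
alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; priority:1; sid:1000101; rev:1;)
```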
target Keyword
Another useful option in Suricata signatures is the target
option. It can be set to one of two values: src_ip
and dest_ip
. The purpose of this option is to correctly identify the source
and target
hosts in Suricata’s alert logs.
For example, the SSH signature from earlier in this tutorial can be enhanced with the target:dest_ip;
option:
alert ssh any any -> 203.0.113.0/24 2022 (msg:"SSH TRAFFIC on non-SSH port"; target:dest_ip; sid:1000000; rev:3;)
This example uses dest_ip
because the rule is designed to check for SSH traffic coming into our example network, so it is the destination. Adding the target
option to a rule will result in the following extra fields in the alert
portion of an eve.json
log entry.
. . .
"source": {
"ip": "127.0.0.1",
"port": 35272
},
"target": {
"ip": "203.0.113.1",
"port": 2022
}
. . .
With these entries in Suricata’s logs, they can be sent to a Security Information and Event Management (SIEM) tool to make it easier to search for alerts that might be originating from a common host, or attacks that are directed to a specific target on your network.
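As a sketch of that kind of search, assuming the jq utility is installed (it appears elsewhere in this series) and given a hypothetical two-entry eve.json excerpt, you could pull out every alert aimed at one target host:

```shell
# Hypothetical excerpt of an eve.json log with target fields enabled.
cat > /tmp/eve_sample.json <<'EOF'
{"event_type":"alert","source":{"ip":"127.0.0.1","port":35272},"target":{"ip":"203.0.113.1","port":2022}}
{"event_type":"alert","source":{"ip":"198.51.100.9","port":40000},"target":{"ip":"203.0.113.7","port":22}}
EOF

# Keep only the alerts whose target is 203.0.113.1:
jq -c 'select(.target.ip == "203.0.113.1")' /tmp/eve_sample.json
```

Only the first record matches, since its target.ip field equals the filtered address.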
In this tutorial you examined each of the main sections that make up a complete Suricata signature. The Action, Header, and Options sections of a rule each have multiple options and support scanning packets using many different protocols. While this tutorial did not explore any of the sections in great depth, the structure of a rule and the important fields in the examples should be enough to get you started writing your own rules.
If you want to explore complete signatures that include many more options than the ones described in this tutorial, explore the files in the /etc/suricata/rules
directory. If there is a field in a rule that you would like to know more about, the Suricata Rules Documentation is the authoritative resource on what each option and its possible values mean.
Once you are comfortable reading and testing signatures, you can proceed to the next tutorial in this series. In it you will learn how to enable Suricata’s IPS mode, which is used to drop suspicious traffic as opposed to the default IDS mode that only generates alerts.
Suricata is a Network Security Monitoring (NSM) tool that uses sets of community created and user defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.
By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.
You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.
In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on Rocky Linux 8 to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.
Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.
If you plan to use Suricata to protect the server that it is running on, you will need:
Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.
If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most CentOS, Fedora, and other RedHat derived servers as well.
To get started installing Suricata, you will need to add the Open Information Security Foundation’s (OISF) software repository information to your Rocky Linux system. You can use the dnf copr enable
command to do this. You will also need to add the Extra Packages for Enterprise Linux (EPEL) repository.
To enable the Community Projects (copr
) subcommand for the dnf
package tool, run the following:
- sudo dnf install 'dnf-command(copr)'
You will be prompted to install some additional dependencies, as well as accept the GPG key for the Rocky Linux distribution. Press y
and ENTER
each time to finish installing the copr
package.
Next run the following command to add the OISF repository to your system and update the list of available packages:
- sudo dnf copr enable @oisf/suricata-6.0
Press y
and ENTER
when you are prompted to confirm that you want to add the repository.
Now add the epel-release
package, which will make some extra dependency packages available for Suricata:
- sudo dnf install epel-release
When you are prompted to import the GPG key, press y
and ENTER
to accept.
Now that you have the required software repositories enabled, you can install the suricata
package using the dnf
command:
- sudo dnf install suricata
When you are prompted to add the GPG key for the OISF repository, press y
and ENTER
. The package and its dependencies will now be downloaded and installed.
Next, enable the suricata.service
so that it will run when your system restarts. Use the systemctl
command to enable it:
- sudo systemctl enable suricata.service
You should receive output like the following indicating the service is enabled:
OutputCreated symlink /etc/systemd/system/multi-user.target.wants/suricata.service → /usr/lib/systemd/system/suricata.service.
Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl
:
- sudo systemctl stop suricata.service
Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.
The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.
However, the default configuration still has a few settings that you may need to change depending on your environment and needs.
Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.
If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.
To enable the option, open /etc/suricata/suricata.yaml
using vi
or your preferred editor:
- sudo vi /etc/suricata/suricata.yaml
Find line 120 which reads # Community Flow ID
. If you are using vi
type 120gg
to go directly to the line. Below that line is the community-id
key. Set it to true
to enable the setting:
. . .
# Community Flow ID
# Adds a 'community_id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Zeek (Bro).
#
# Takes a 'seed' that needs to be same across sensors and tools
# to make the id less predictable.
# enable/disable the community id feature.
community-id: true
. . .
Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ=
that you can use to correlate records across different NSM tools.
Save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
and then :x
then ENTER
to save and exit the file.
You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0
. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.
To determine the device name of your default network interface, you can use the ip
command as follows:
- ip -p -j route show default
The -p
flag formats the output to be more readable, and the -j
flag prints the output as JSON.
You should receive output like the following:
Output[ {
"dst": "default",
"gateway": "203.0.113.254",
"dev": "eth0",
"protocol": "static",
"metric": 100,
"flags": [ ]
} ]
The dev
line indicates the default device. In this example output, the device is the highlighted eth0
interface. Your output may show a device name like ens...
or eno...
. Whatever the name is, make a note of it.
Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml
configuration file using vi
or your preferred editor:
- sudo vi /etc/suricata/suricata.yaml
Scroll through the file until you come to a line that reads af-packet:
around line 580. If you are using vi
you can also go to the line directly by entering 580gg
. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:
# Linux high speed capture support
af-packet:
- interface: eth0
# Number of receive threads. "auto" uses the number of cores
#threads: auto
# Default clusterid. AF_PACKET will load balance packets based on flow.
cluster-id: 99
. . .
If you want to inspect traffic on additional interfaces, you can add more - interface: eth...
YAML objects. For example, to add a device named enp0s1
, scroll down to the bottom of the af-packet
section to around line 650. To add a new interface, insert it before the - interface: default
section like the following highlighted example:
# For eBPF and XDP setup including bypass, filter and load balancing, please
# see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
- interface: enp0s1
cluster-id: 98
- interface: default
#threads: auto
#use-mmap: no
#tpacket-v3: yes
Be sure to choose a unique cluster-id
value for each - interface
object.
Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
, then :x
and ENTER
to save and quit.
Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:
. . .
detect-engine:
- rule-reload: true
With this setting in place, you will be able to send the SIGUSR2
system signal to the running process, and Suricata will reload any changed rules into memory.
A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:
- sudo kill -usr2 $(pidof suricata)
The $(pidof suricata)
portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2
part of the command uses the kill
utility to send the SIGUSR2
signal to the process ID that is reported back by the subshell.
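To see these mechanics without touching a live Suricata process, the following sketch uses a stand-in "daemon" (a shell process with a SIGUSR2 trap) in place of Suricata; the kill and subshell usage follow the same shape:

```shell
# Stand-in daemon: traps SIGUSR2 and records a pretend rule reload.
# (bash's `wait` is interruptible, so the trap handler runs promptly.)
bash -c 'trap "echo rules-reloaded > /tmp/reload_demo.log; exit 0" USR2
         sleep 5 & wait $!' &
DAEMON_PID=$!

sleep 0.5                   # give the trap handler time to be installed
kill -USR2 "$DAEMON_PID"    # same shape as: sudo kill -usr2 $(pidof suricata)
wait "$DAEMON_PID"          # the stand-in exits after handling the signal

cat /tmp/reload_demo.log    # prints: rules-reloaded
```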
You can use this command any time you run suricata-update
or when you add or edit your own custom rules.
Save and close the /etc/suricata/suricata.yaml
file. If you are using vi
, you can do so with ESC
, then :x
and ENTER
to confirm.
At this point in the tutorial, if you were to start Suricata, you would receive a warning message like the following in the logs that there are no loaded rules:
Output<Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules
By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules
directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.
Suricata includes a tool called suricata-update
that can fetch rulesets from external providers. Run it as follows to download an up to date ruleset for your Suricata server:
- sudo suricata-update
You should receive output like the following:
Output19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
19/10/2021 -- 19:31:03 - <Info> -- Using /usr/share/suricata/rules for Suricata provided rules.
. . .
19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.3/emerging.rules.tar.gz.
100% - 3062850/3062850
. . .
19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 31011; enabled: 23649; added: 31011; removed 0; modified: 0
19/10/2021 -- 19:31:07 - <Info> -- Writing /var/lib/suricata/rules/classification.config
19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
19/10/2021 -- 19:31:32 - <Info> -- Done.
The highlighted lines indicate suricata-update
has fetched the free Emerging Threats ET Open Rules, and saved them to Suricata’s /var/lib/suricata/rules/suricata.rules
file. It also indicates the number of rules that were processed, in this example, 31011 were added and of those 23649 were enabled.
The suricata-update
tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.
You can list the default set of rule providers using the list-sources
flag to suricata-update
like this:
- sudo suricata-update list-sources
You will receive a list of sources like the following:
Output. . .
19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
Name: et/open
Vendor: Proofpoint
Summary: Emerging Threats Open Ruleset
License: MIT
. . .
For example, if you wanted to include the tgreen/hunting
ruleset, you could enable it using the following command:
- sudo suricata-update enable-source tgreen/hunting
Then run suricata-update
again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.
Now that you have edited Suricata’s configuration file to include the optional Community ID, specify the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.
Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T
flag to run Suricata in test mode. The -v
flag will print some additional information, and the -c
flag tells Suricata where to find its configuration file:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.
With the default ET Open ruleset you should receive output like the following:
Output21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed
21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
21/10/2021 -- 15:00:47 - <Info> - 23882 signatures processed. 1183 are IP-only rules, 4043 are inspecting packet payload, 18453 inspect application layer, 107 are decoder event only
21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete
If there is an error in your configuration file, then the test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a rules file that does not exist called test.rules
would generate an error like the following:
Output21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/test.rules
With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.
Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.
Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl
command:
- sudo systemctl start suricata.service
You can examine the status of the service using the systemctl status
command:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - Suricata Intrusion Detection Service
Loaded: loaded (/usr/lib/systemd/system/suricata.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-10-21 18:22:56 UTC; 1min 57s ago
Docs: man:suricata(1)
Process: 24588 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
Main PID: 24590 (Suricata-Main)
Tasks: 1 (limit: 23473)
Memory: 80.2M
CGroup: /system.slice/suricata.service
└─24590 /sbin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -i eth0 --user suricata
Oct 21 18:22:56 suricata systemd[1]: Starting Suricata Intrusion Detection Service..
Oct 21 18:22:56 suricata systemd[1]: Started Suricata Intrusion Detection Service.
. . .
As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail
command to watch for a specific message in Suricata’s logs that indicates it has finished starting:
- sudo tail -f /var/log/suricata/suricata.log
You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:
Output19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.
This line indicates Suricata is running and ready to inspect traffic. You can exit the tail
command using CTRL+C
.
Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.
The ET Open ruleset that you downloaded contains over 30000 rules. A full explanation of how Suricata rules work, and how to construct them is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.
For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498
using the curl
command.
Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:
- curl http://testmynids.org/uid/index.html
The curl
command will output a response like the following:
Outputuid=0(root) gid=0(root) groups=0(root)
This example response data is designed to trigger an alert, by pretending to return the output of a command like id
that might run on a compromised remote system via a web shell.
Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is in /var/log/suricata/fast.log
and the second is a machine readable log in /var/log/suricata/eve.json.
/var/log/suricata/fast.log
To check for a log entry in /var/log/suricata/fast.log
that corresponds to your curl
request use the grep
command. Using the 2100498
rule identifier from the Quickstart documentation, search for entries that match it using the following command:
- grep 2100498 /var/log/suricata/fast.log
If your request used IPv6, then you should receive output like the following, where 2001:DB8::1
is your system’s public IPv6 address:
Output10/21/2021-18:35:54.950106 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628
If your request used IPv4, then your log should have a message like this, where 203.0.113.1
is your system’s public IPv4 address:
Output10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364
Note the highlighted 2100498
value in the output, which is the Signature ID (sid
) that Suricata uses to identify a rule.
/var/log/suricata/eve.json
Suricata also logs events to /var/log/suricata/eve.json
(nicknamed the EVE log) using JSON to format entries.
The Suricata documentation recommends using the jq
utility to read and filter the entries in this file. Install jq
if you do not have it on your system using the following dnf
command:
- sudo dnf install jq
Once you have jq
installed, you can filter the events in the EVE log by searching for the 2100498
signature with the following command:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
The command examines each JSON entry and prints any that have an alert
object, with a signature_id
key that matches the 2100498
value that you are searching for. The output will resemble the following:
Output{
"timestamp": "2021-10-21T19:42:47.368856+0000",
"flow_id": 775889108832281,
"in_iface": "eth0",
"event_type": "alert",
"src_ip": "203.0.113.1",
"src_port": 80,
"dest_ip": "147.182.148.159",
"dest_port": 38920,
"proto": "TCP",
"community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=",
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
. . .
}
Note the highlighted "signature_id": 2100498,
line, which is the key that jq
is searching for. Also note the highlighted "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=",
line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.
Each network flow generates a unique Community Flow Identifier. Other NSM tools can also generate the same identifier to enable cross-referencing a Suricata alert with output from other tools.
A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Information and Event Management (SIEM) system for further processing.
Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient; or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.
If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq
tool to extract specific fields from an alert, and then add UFW or IPtables rules to block requests.
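As a sketch of that workflow, using a hypothetical single-line alert record and printing the firewall command instead of running it (the ufw rule shape here is illustrative, not a recommendation):

```shell
# Hypothetical eve.json alert record for signature 2100498.
cat > /tmp/alert_sample.json <<'EOF'
{"event_type":"alert","src_ip":"198.51.100.23","alert":{"signature_id":2100498}}
EOF

# Extract the source address of the matching alert with jq...
SRC_IP=$(jq -r 'select(.alert.signature_id == 2100498) | .src_ip' /tmp/alert_sample.json)

# ...and build (but do not execute) a blocking rule from it.
echo "sudo ufw deny from ${SRC_IP}"   # prints: sudo ufw deny from 198.51.100.23
```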
Again, this example is a hypothetical scenario using deliberately crafted request and response data. Your knowledge of the systems and protocols that your environment should be able to access is essential in order to determine which traffic is legitimate and which can be blocked.
In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.
Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.
For more information about Suricata, visit the official Suricata Site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.
Now that you have Suricata installed and configured, you can continue to the next tutorial in this series Understanding Suricata Signatures where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.
]]>Suricata is a Network Security Monitoring (NSM) tool that uses sets of community-created and user-defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.
By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.
You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.
In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on Debian 11 to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.
Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.
If you plan to use Suricata to protect the server that it is running on, you will need:
Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.
If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most Debian and Ubuntu servers as well.
To get started installing Suricata, you will need to update the list of available packages on your Debian system. You can use the apt update
command to do this:
- sudo apt update
Now you can install the suricata
package using the apt
command:
- sudo apt install suricata
Now that the package is installed, enable the suricata.service
so that it will run when your system restarts. Use the systemctl
command to enable it:
- sudo systemctl enable suricata.service
You should receive output like the following indicating the service is enabled:
OutputSynchronizing state of suricata.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable suricata
. . .
Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl
:
- sudo systemctl stop suricata.service
Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.
The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.
However, the default configuration still has a few settings that you may need to change depending on your environment and needs.
Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.
If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.
To enable the option, open /etc/suricata/suricata.yaml
using nano or your preferred editor:
- sudo nano /etc/suricata/suricata.yaml
Find line 120 which reads # Community Flow ID
. If you are using nano
type CTRL+_
and then 120
when prompted to enter a line number. Below that line is the community-id
key. Set it to true
to enable the setting:
. . .
# Community Flow ID
# Adds a 'community_id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Zeek (Bro).
#
# Takes a 'seed' that needs to be same across sensors and tools
# to make the id less predictable.
# enable/disable the community id feature.
community-id: true
. . .
Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ=
that you can use to correlate records across different NMS tools.
Save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0
. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.
To determine the device name of your default network interface, you can use the ip
command as follows:
- ip -p -j route show default
The -p
flag formats the output to be more readable, and the -j
flag prints the output as JSON.
You should receive output like the following:
Output[ {
"dst": "default",
"gateway": "203.0.113.254",
"dev": "eth0",
"flags": [ "onlink" ]
} ]
The dev
line indicates the default device. In this example output, the device is the highlighted eth0
interface. Your output may show a device name like ens...
or eno...
. Whatever the name is, make a note of it.
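If you prefer to capture that value in a script, the JSON form parses cleanly with jq (which you will install later in this tutorial). This is only a sketch, and the interface name will differ per system:

```shell
# Extract just the default interface name from the JSON route output.
# Assumes jq is installed (sudo apt install jq).
DEFAULT_IFACE=$(ip -j route show default | jq -r '.[0].dev')
echo "$DEFAULT_IFACE"
```

The `-r` flag tells jq to print the raw string without surrounding quotes, which makes the value easier to reuse in other commands.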
Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml
configuration file using nano
or your preferred editor:
- sudo nano /etc/suricata/suricata.yaml
Scroll through the file until you come to a line that reads af-packet:
around line 580. If you are using nano
you can also go to the line directly by entering CTRL+_
and typing the line number. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:
# Linux high speed capture support
af-packet:
- interface: eth0
# Number of receive threads. "auto" uses the number of cores
#threads: auto
# Default clusterid. AF_PACKET will load balance packets based on flow.
cluster-id: 99
. . .
If you want to inspect traffic on additional interfaces, you can add more - interface: eth...
YAML objects. For example, to add a device named enp0s1
, scroll down to the bottom of the af-packet
section to around line 650. To add a new interface, insert it before the - interface: default
section like the following highlighted example:
# For eBPF and XDP setup including bypass, filter and load balancing, please
# see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
- interface: enp0s1
cluster-id: 98
- interface: default
#threads: auto
#use-mmap: no
#tpacket-v3: yes
Be sure to choose a unique cluster-id
value for each - interface
object.
Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:
. . .
detect-engine:
- rule-reload: true
With this setting in place, you will be able to send the SIGUSR2
system signal to the running process, and Suricata will reload any changed rules into memory.
A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:
- sudo kill -usr2 $(pidof suricata)
The $(pidof suricata)
portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2
part of the command uses the kill
utility to send the SIGUSR2
signal to the process ID that is reported back by the subshell.
You can use this command any time you run suricata-update
or when you add or edit your own custom rules.
Save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
At this point in the tutorial, if you were to start Suricata, you would receive a warning message in the logs like the following, indicating that there are no loaded rules:
Output<Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /etc/suricata/rules/suricata.rules
By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules
directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.
Suricata includes a tool called suricata-update
that can fetch rulesets from external providers. Run it as follows to download an up-to-date ruleset for your Suricata server:
- sudo suricata-update -o /etc/suricata/rules
The -o /etc/suricata/rules
portion of the command instructs the update tool to save the rules to a specific directory. You should receive output like the following:
Output19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
19/10/2021 -- 19:31:03 - <Info> -- Using /etc/suricata/rules for Suricata provided rules.
. . .
19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.1/emerging.rules.tar.gz.
100% - 3052046/3052046
. . .
19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /etc/suricata/rules/suricata.rules: total: 31063; enabled: 23700; added: 31063; removed 0; modified: 0
19/10/2021 -- 19:31:07 - <Info> -- Writing /etc/suricata/rules/classification.config
19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
19/10/2021 -- 19:31:32 - <Info> -- Done.
The highlighted lines indicate suricata-update
has fetched the free Emerging Threats ET Open Rules, and saved them to Suricata’s /etc/suricata/rules/suricata.rules
file. It also indicates the number of rules that were processed: in this example, 31063 were added and of those 23700 were enabled.
The suricata-update
tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.
You can list the default set of rule providers using the list-sources
flag to suricata-update
like this:
- sudo suricata-update list-sources
You will receive a list of sources like the following:
Output. . .
19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
Name: et/open
Vendor: Proofpoint
Summary: Emerging Threats Open Ruleset
License: MIT
. . .
For example, if you wanted to include the tgreen/hunting
ruleset, you could enable it using the following command:
- sudo suricata-update enable-source tgreen/hunting -o /etc/suricata/rules
Then run the suricata-update
command with the -o /etc/suricata/rules
flag again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.
Now that you have edited Suricata’s configuration file to include the optional Community ID, specify the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.
Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T
flag to run Suricata in test mode. The -v
flag will print some additional information, and the -c
flag tells Suricata where to find its configuration file:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.
With the default ET Open ruleset you should receive output like the following:
Output21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.1 RELEASE running in SYSTEM mode
21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23700 rules successfully loaded, 0 rules failed
21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
21/10/2021 -- 15:00:47 - <Info> - 23703 signatures processed. 1175 are IP-only rules, 3974 are inspecting packet payload, 18355 inspect application layer, 104 are decoder event only
21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete
If there is an error in your configuration file, then the test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a rules file that does not exist called test.rules
would generate an error like the following:
Output21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /etc/suricata/rules/test.rules
With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.
Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.
Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl
command:
- sudo systemctl start suricata.service
You can examine the status of the service using the systemctl status
command:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - Suricata IDS/IDP daemon
Loaded: loaded (/lib/systemd/system/suricata.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-10-29 19:46:02 UTC; 6s ago
Docs: man:suricata(8)
man:suricatasc(8)
https://suricata-ids.org/docs/
Process: 4278 ExecStart=/usr/bin/suricata -D --af-packet -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid (code=exited, status=0/SUCCESS)
Main PID: 4279 (Suricata-Main)
Tasks: 1 (limit: 4678)
Memory: 206.0M
CPU: 6.273s
CGroup: /system.slice/suricata.service
└─4279 /usr/bin/suricata -D --af-packet -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid
Oct 29 19:46:02 suricata systemd[1]: Starting Suricata IDS/IDP daemon...
Oct 29 19:46:02 suricata suricata[4278]: 29/10/2021 -- 19:46:02 - <Notice> - This is Suricata version 6.0.1 RELEASE running in SYSTEM mode
Oct 29 19:46:02 suricata systemd[1]: Started Suricata IDS/IDP daemon.
As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail
command to watch for a specific message in Suricata’s logs that indicates it has finished starting:
- sudo tail -f /var/log/suricata/suricata.log
You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:
Output29/10/2021 -- 19:46:34 - <Info> - All AFP capture threads are running.
This line indicates Suricata is running and ready to inspect traffic. You can exit the tail
command using CTRL+C
.
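If you start Suricata from scripts or configuration management, you may not want to watch tail by hand. The following shell function is a sketch of one way to wait for that readiness line, assuming the default log path:

```shell
# Sketch: block until Suricata reports that its capture threads are
# running, with a bounded retry count so the wait cannot hang forever.
wait_for_suricata() {
  log="${1:-/var/log/suricata/suricata.log}"
  tries="${2:-120}"
  while [ "$tries" -gt 0 ]; do
    # The readiness message comes from Suricata's own startup log output.
    if grep -q "All AFP capture threads are running" "$log" 2>/dev/null; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}
# Example use: wait_for_suricata && echo "Suricata is ready"
```

The function name and timeout values are illustrative; adjust the retry count to match how long rule loading takes on your server.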
Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.
The ET Open ruleset that you downloaded contains over 30,000 rules. A full explanation of how Suricata rules work, and how to construct them, is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.
For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498
using the curl
command.
Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:
- curl http://testmynids.org/uid/index.html
The curl
command will output a response like the following:
Outputuid=0(root) gid=0(root) groups=0(root)
This example response data is designed to trigger an alert, by pretending to return the output of a command like id
that might run on a compromised remote system via a web shell.
Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is in /var/log/suricata/fast.log
and the second is a machine-readable log in /var/log/suricata/eve.json
.
/var/log/suricata/fast.log
To check for a log entry in /var/log/suricata/fast.log
that corresponds to your curl
request use the grep
command. Using the 2100498
rule identifier from the Quickstart documentation, search for entries that match it using the following command:
- grep 2100498 /var/log/suricata/fast.log
If your request used IPv6, then you should receive output like the following, where 2001:DB8::1
is your system’s public IPv6 address:
Output10/29/2021-19:47:33.631122 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628
If your request used IPv4, then your log should have a message like this, where 203.0.113.1
is your system’s public IPv4 address:
Output10/29/2021-19:48:05.832461 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364
Note the highlighted 2100498
value in the output, which is the Signature ID (sid
) that Suricata uses to identify a rule.
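Because each fast.log entry embeds its identifiers in that [gid:sid:rev] field, you can also produce a rough count of alerts per signature with standard shell tools. This sketch assumes the default log location:

```shell
# Tally fast.log alerts per signature ID: pull out the [gid:sid:rev]
# field, keep the middle sid value, then count and rank occurrences.
grep -o '\[1:[0-9]*:[0-9]*\]' /var/log/suricata/fast.log \
  | cut -d: -f2 | sort | uniq -c | sort -rn
```

Each output line shows a count followed by a sid, with the noisiest signatures first.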
/var/log/suricata/eve.json
Suricata also logs events to /var/log/suricata/eve.json
(nicknamed the EVE log) using JSON to format entries.
The Suricata documentation recommends using the jq
utility to read and filter the entries in this file. Install jq
if you do not have it on your system using the following apt
command:
- sudo apt install jq
Once you have jq
installed, you can filter the events in the EVE log by searching for the 2100498
signature with the following command:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
The command examines each JSON entry and prints any that have an alert
object, with a signature_id
key that matches the 2100498
value that you are searching for. The output will resemble the following:
Output{
"timestamp": "2021-10-29T19:48:05.832461+0000",
"flow_id": 666167948976574,
"in_iface": "eth0",
"event_type": "alert",
"src_ip": "204.246.178.81",
"src_port": 80,
"dest_ip": "203.0.113.1",
"dest_port": 36364,
"proto": "TCP",
"community_id": "1:orJE+IStTM2bjccd9RzqMmjYceE=",
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
. . .
}
Note the highlighted "signature_id": 2100498,
line, which is the key that jq
is searching for. Also note the highlighted "community_id": "1:orJE+IStTM2bjccd9RzqMmjYceE=",
line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.
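As an illustration, a sketch like the following would list the unique Community IDs of all alert events, which you could then search for in Zeek or Elasticsearch output (default EVE log path assumed):

```shell
# Print the unique Community Flow IDs from all alert events in the EVE
# log; each ID can be matched against other tools' output for the flow.
jq -r 'select(.event_type=="alert") | .community_id' \
  /var/log/suricata/eve.json | sort -u
```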
Each alert will generate a unique Community Flow Identifier. Other NMS tools can also generate the same identifier to enable cross-referencing a Suricata alert with output from other tools.
A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Information and Event Management (SIEM) system for further processing.
Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient; or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.
If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq
tool to extract specific fields from an alert, and then add UFW or IPtables rules to block requests.
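The following is one hypothetical sketch of that approach, assuming the default EVE log path and using the 2100498 test signature from earlier as an example. Review each extracted address before blocking anything in a real environment:

```shell
# Hypothetical sketch: pull the source IPs of alerts matching a given
# signature ID out of the EVE log, then add a UFW deny rule for each.
# The signature ID and log path are assumptions; adjust for your setup.
SIG_ID=2100498
jq -r "select(.event_type==\"alert\" and .alert.signature_id==${SIG_ID}) | .src_ip" \
  /var/log/suricata/eve.json | sort -u | while read -r ip; do
  sudo ufw deny from "$ip" to any
done
```

The `sort -u` step deduplicates addresses so each one generates only a single firewall rule.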
Again, this example is a hypothetical scenario using deliberately crafted request and response data. Your knowledge of the systems and protocols that your environment should be able to access is essential in order to determine which traffic is legitimate and which can be blocked.
In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.
Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.
For more information about Suricata, visit the official Suricata Site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.
Now that you have Suricata installed and configured, you can continue to the next tutorial in this series Understanding Suricata Signatures where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.
]]>Suricata is a Network Security Monitoring (NSM) tool that uses sets of community-created and user-defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.
By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.
You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.
In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on Ubuntu 20.04 to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.
Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.
If you plan to use Suricata to protect the server that it is running on, you will need:
Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.
If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most Ubuntu servers as well.
To get started installing Suricata, you will need to add the Open Information Security Foundation’s (OISF) software repository information to your Ubuntu system. You can use the add-apt-repository
command to do this.
Run the following command to add the repository to your system and update the list of available packages:
- sudo add-apt-repository ppa:oisf/suricata-stable
Press ENTER
when you are prompted to confirm that you want to add the repository. The command will update the list of available packages for you after it adds the new repository.
Now you can install the suricata
package using the apt
command:
- sudo apt install suricata
Now that the package is installed, enable the suricata.service
so that it will run when your system restarts. Use the systemctl
command to enable it:
- sudo systemctl enable suricata.service
You should receive output like the following indicating the service is enabled:
Outputsuricata.service is not a native service, redirecting to systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable suricata
Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl
:
- sudo systemctl stop suricata.service
Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.
The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.
However, the default configuration still has a few settings that you may need to change depending on your environment and needs.
Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.
If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.
To enable the option, open /etc/suricata/suricata.yaml
using nano or your preferred editor:
- sudo nano /etc/suricata/suricata.yaml
Find line 120 which reads # Community Flow ID
. If you are using nano
type CTRL+_
and then 120
when prompted to enter a line number. Below that line is the community-id
key. Set it to true
to enable the setting:
. . .
# Community Flow ID
# Adds a 'community_id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Zeek (Bro).
#
# Takes a 'seed' that needs to be same across sensors and tools
# to make the id less predictable.
# enable/disable the community id feature.
community-id: true
. . .
Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ=
that you can use to correlate records across different NMS tools.
Save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0
. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.
To determine the device name of your default network interface, you can use the ip
command as follows:
- ip -p -j route show default
The -p
flag formats the output to be more readable, and the -j
flag prints the output as JSON.
You should receive output like the following:
Output[ {
"dst": "default",
"gateway": "203.0.113.254",
"dev": "eth0",
"protocol": "static",
"flags": [ ]
} ]
The dev
line indicates the default device. In this example output, the device is the highlighted eth0
interface. Your output may show a device name like ens...
or eno...
. Whatever the name is, make a note of it.
Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml
configuration file using nano
or your preferred editor:
- sudo nano /etc/suricata/suricata.yaml
Scroll through the file until you come to a line that reads af-packet:
around line 580. If you are using nano
you can also go to the line directly by entering CTRL+_
and typing the line number. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:
# Linux high speed capture support
af-packet:
- interface: eth0
# Number of receive threads. "auto" uses the number of cores
#threads: auto
# Default clusterid. AF_PACKET will load balance packets based on flow.
cluster-id: 99
. . .
If you want to inspect traffic on additional interfaces, you can add more - interface: eth...
YAML objects. For example, to add a device named enp0s1
, scroll down to the bottom of the af-packet
section to around line 650. To add a new interface, insert it before the - interface: default
section like the following highlighted example:
# For eBPF and XDP setup including bypass, filter and load balancing, please
# see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
- interface: enp0s1
cluster-id: 98
- interface: default
#threads: auto
#use-mmap: no
#tpacket-v3: yes
Be sure to choose a unique cluster-id
value for each - interface
object.
Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:
. . .
detect-engine:
- rule-reload: true
With this setting in place, you will be able to send the SIGUSR2
system signal to the running process, and Suricata will reload any changed rules into memory.
A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:
- sudo kill -usr2 $(pidof suricata)
The $(pidof suricata)
portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2
part of the command uses the kill
utility to send the SIGUSR2
signal to the process ID that is reported back by the subshell.
You can use this command any time you run suricata-update
or when you add or edit your own custom rules.
Save and close the /etc/suricata/suricata.yaml
file. If you are using nano
, you can do so with CTRL+X
, then Y
and ENTER
to confirm.
At this point in the tutorial, if you were to start Suricata, you would receive a warning message in the logs like the following, indicating that there are no loaded rules:
Output<Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules
By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules
directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.
Suricata includes a tool called suricata-update
that can fetch rulesets from external providers. Run it as follows to download an up-to-date ruleset for your Suricata server:
- sudo suricata-update
You should receive output like the following:
Output19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
19/10/2021 -- 19:31:03 - <Info> -- Using /etc/suricata/rules for Suricata provided rules.
. . .
19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.3/emerging.rules.tar.gz.
100% - 3044855/3044855
. . .
19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 31011; enabled: 23649; added: 31011; removed 0; modified: 0
19/10/2021 -- 19:31:07 - <Info> -- Writing /var/lib/suricata/rules/classification.config
19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
19/10/2021 -- 19:31:32 - <Info> -- Done.
The highlighted lines indicate suricata-update
has fetched the free Emerging Threats ET Open Rules, and saved them to Suricata’s /var/lib/suricata/rules/suricata.rules
file. It also indicates the number of rules that were processed: in this example, 31011 rules were added, of which 23649 were enabled.
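If you want to check how many rules are currently enabled without re-running suricata-update, you can count the non-commented lines in the ruleset file (enabled rules begin with an action keyword like alert; disabled rules are commented out with #). The sketch below runs against a small sample file so it is self-contained; on your server you would point it at /var/lib/suricata/rules/suricata.rules instead.

```shell
# Create a small sample ruleset; on your server, point at
# /var/lib/suricata/rules/suricata.rules instead of /tmp/sample.rules
cat > /tmp/sample.rules <<'EOF'
alert http any any -> any any (msg:"example rule 1"; sid:1000001;)
#alert tcp any any -> any any (msg:"disabled rule"; sid:1000002;)

alert dns any any -> any any (msg:"example rule 2"; sid:1000003;)
EOF

# Enabled rules start with a lowercase action keyword; disabled rules
# and comments start with '#', so count only lines starting with a letter
enabled=$(grep -c '^[a-z]' /tmp/sample.rules)
echo "Enabled rules: $enabled"
```

For the sample file this prints Enabled rules: 2, since one of the three rules is commented out.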
The suricata-update
tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.
You can list the default set of rule providers using the list-sources
flag to suricata-update
like this:
- sudo suricata-update list-sources
You will receive a list of sources like the following:
Output. . .
19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
Name: et/open
Vendor: Proofpoint
Summary: Emerging Threats Open Ruleset
License: MIT
. . .
For example, if you wanted to include the tgreen/hunting
ruleset, you could enable it using the following command:
- sudo suricata-update enable-source tgreen/hunting
Then run suricata-update
again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.
Now that you have edited Suricata’s configuration file to include the optional Community ID, specified the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.
Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T
flag to run Suricata in test mode. The -v
flag will print some additional information, and the -c
flag tells Suricata where to find its configuration file:
- sudo suricata -T -c /etc/suricata/suricata.yaml -v
The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.
With the default ET Open ruleset you should receive output like the following:
Output21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed
21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
21/10/2021 -- 15:00:47 - <Info> - 23882 signatures processed. 1183 are IP-only rules, 4043 are inspecting packet payload, 18453 inspect application layer, 107 are decoder event only
21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete
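If you script your deployments, you can also parse the summary line from the test run to confirm that no rules failed to load. This sketch works on a sample of the summary line shown above so it is self-contained; in a real script you might capture the line with something like summary=$(sudo suricata -T -c /etc/suricata/suricata.yaml 2>&1 | grep 'rules failed'). The variable names are illustrative.

```shell
# A summary line as printed by `suricata -T`; on a server you would
# capture this from the actual test-mode output
summary='21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed'

# Extract the count of failed rules from the end of the line
failed=$(printf '%s\n' "$summary" | sed -n 's/.*, \([0-9][0-9]*\) rules failed.*/\1/p')
if [ "$failed" = "0" ]; then
    echo "all rules loaded"
else
    echo "$failed rules failed to load" >&2
fi
```

For the sample line this prints all rules loaded; a non-zero count would be sent to standard error instead.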
If there is an error in your configuration file, then the test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a rules file that does not exist called test.rules
would generate an error like the following:
Output21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/test.rules
With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.
Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.
Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl
command:
- sudo systemctl start suricata.service
You can examine the status of the service using the systemctl status
command:
- sudo systemctl status suricata.service
You should receive output like the following:
Output● suricata.service - LSB: Next Generation IDS/IPS
Loaded: loaded (/etc/init.d/suricata; generated)
Active: active (running) since Thu 2021-10-21 18:22:56 UTC; 1min 57s ago
Docs: man:systemd-sysv-generator(8)
Process: 22636 ExecStart=/etc/init.d/suricata start (code=exited, status=0/SUCCESS)
Tasks: 8 (limit: 2344)
Memory: 359.2M
CGroup: /system.slice/suricata.service
└─22656 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -vvv
Oct 21 18:22:56 suricata systemd[1]: Starting LSB: Next Generation IDS/IPS...
Oct 21 18:22:56 suricata suricata[22636]: Starting suricata in IDS (af-packet) mode... done.
Oct 21 18:22:56 suricata systemd[1]: Started LSB: Next Generation IDS/IPS.
As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail
command to watch for a specific message in Suricata’s logs that indicates it has finished starting:
- sudo tail -f /var/log/suricata/suricata.log
You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:
Output19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.
This line indicates Suricata is running and ready to inspect traffic. You can exit the tail
command using CTRL+C
.
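If you are automating startup rather than watching tail interactively, you can poll the log until the ready message appears. The sketch below simulates the log with a temporary file so it can run anywhere; on a real server you would set LOG=/var/log/suricata/suricata.log and likely add a timeout.

```shell
# Simulate the log with a temporary file; on a real server use
# LOG=/var/log/suricata/suricata.log instead
LOG=/tmp/suricata-demo.log
echo '19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.' > "$LOG"

# Poll until the ready message appears in the log
until grep -q 'All AFP capture threads are running' "$LOG"; do
    sleep 1
done
echo "Suricata is ready"
```
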
Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.
The ET Open ruleset that you downloaded contains over 30,000 rules. A full explanation of how Suricata rules work, and how to construct them, is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.
For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498
using the curl
command.
Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:
- curl http://testmynids.org/uid/index.html
The curl
command will output a response like the following:
Outputuid=0(root) gid=0(root) groups=0(root)
This example response data is designed to trigger an alert, by pretending to return the output of a command like id
that might run on a compromised remote system via a web shell.
Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is in /var/log/suricata/fast.log
and the second is a machine readable log in /var/log/suricata/eve.json
.
/var/log/suricata/fast.log
To check for a log entry in /var/log/suricata/fast.log
that corresponds to your curl
request use the grep
command. Using the 2100498
rule identifier from the Quickstart documentation, search for entries that match it using the following command:
- grep 2100498 /var/log/suricata/fast.log
If your request used IPv6, then you should receive output like the following, where 2001:DB8::1
is your system’s public IPv6 address:
Output10/21/2021-18:35:54.950106 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628
If your request used IPv4, then your log should have a message like this, where 203.0.113.1
is your system’s public IPv4 address:
Output10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364
Note the highlighted 2100498
value in the output, which is the Signature ID (sid
) that Suricata uses to identify a rule.
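If you want to pull the sid out of a fast.log entry programmatically, you can extract the middle field of the [gid:sid:rev] triplet. This sketch operates on a copy of the sample log line shown above so it is self-contained; on a live system you might read lines from /var/log/suricata/fast.log instead.

```shell
# A fast.log entry like the ones shown above
line='10/21/2021-18:35:57.247239 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364'

# The sid is the middle number in the [gid:sid:rev] triplet
sid=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9][0-9]*\):\([0-9][0-9]*\):\([0-9][0-9]*\)\].*/\2/p')
echo "$sid"
```

For the sample line this prints 2100498, matching the rule you tested with curl.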
/var/log/suricata/eve.json
Suricata also logs events to /var/log/suricata/eve.json
(nicknamed the EVE log) using JSON to format entries.
The Suricata documentation recommends using the jq
utility to read and filter the entries in this file. Install jq
if you do not have it on your system using the following apt
command:
- sudo apt install jq
Once you have jq
installed, you can filter the events in the EVE log by searching for the 2100498
signature with the following command:
- jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json
The command examines each JSON entry and prints any that have an alert
object, with a signature_id
key that matches the 2100498
value that you are searching for. The output will resemble the following:
Output{
"timestamp": "2021-10-21T19:42:47.368856+0000",
"flow_id": 775889108832281,
"in_iface": "eth0",
"event_type": "alert",
"src_ip": "203.0.113.1",
"src_port": 80,
"dest_ip": "147.182.148.159",
"dest_port": 38920,
"proto": "TCP",
"community_id": "1:XLNse90QNVTgyXCWN9JDovC0XF4=",
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2100498,
"rev": 7,
"signature": "GPL ATTACK_RESPONSE id check returned root",
"category": "Potentially Bad Traffic",
. . .
}
Note the highlighted "signature_id": 2100498,
line, which is the key that jq
is searching for. Also note the highlighted "community_id": "1:XLNse90QNVTgyXCWN9JDovC0XF4=",
line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.
Each alert will generate a unique Community Flow Identifier. Other NMS tools can also generate the same identifier to enable cross-referencing a Suricata alert with output from other tools.
A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Information Event Management (SIEM) system for further processing.
Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient; or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.
If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq
tool to extract specific fields from an alert, and then add UFW or IPtables rules to block requests.
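As a minimal sketch of that workflow, the following extracts the unique source IPs for a given signature and prints (rather than executes) a UFW deny command for each, so you can review before applying anything. It uses a small sample EVE file with grep and sed so it is self-contained; on a server you would read /var/log/suricata/eve.json, ideally with jq as shown earlier. The file path and sample entries are illustrative.

```shell
# Sample EVE entries, one JSON object per line; on a server, read
# /var/log/suricata/eve.json (ideally with jq) instead of this file
cat > /tmp/eve-sample.json <<'EOF'
{"timestamp":"2021-10-21T19:42:47+0000","event_type":"alert","src_ip":"203.0.113.1","alert":{"signature_id":2100498}}
{"timestamp":"2021-10-21T19:43:00+0000","event_type":"flow","src_ip":"198.51.100.7"}
{"timestamp":"2021-10-21T19:44:10+0000","event_type":"alert","src_ip":"203.0.113.1","alert":{"signature_id":2100498}}
EOF

# Print a UFW deny command for each unique alerting source IP for review;
# do not pipe this straight into a shell without inspecting it first
grep '"signature_id":2100498' /tmp/eve-sample.json \
  | sed -n 's/.*"src_ip":"\([^"]*\)".*/\1/p' \
  | sort -u \
  | while read -r ip; do
      echo "sudo ufw deny from $ip"
    done
```

For the sample data this prints a single command for 203.0.113.1; the flow event for 198.51.100.7 is not an alert and is ignored.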
Again, this example is a hypothetical scenario using deliberately crafted request and response data. Knowing which systems and protocols your environment should legitimately access is essential to determine which traffic can be blocked safely.
In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.
Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.
For more information about Suricata, visit the official Suricata Site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.
Now that you have Suricata installed and configured, you can continue to the next tutorial in this series Understanding Suricata Signatures where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.