
How to Install ELK Stack on Ubuntu 20.04

Install ELK Stack on Ubuntu 20.04 with our step-by-step tutorial. The ELK Stack is a powerful open-source tool for log management and data analysis.


Introduction

Before we begin talking about how to install ELK on Ubuntu 20.04, let’s briefly understand what ELK Stack is.

ELK Stack, also known as the Elastic Stack, is a powerful open-source tool used for log management and data analysis. It consists of three main components: Elasticsearch, Logstash, and Kibana.

Elasticsearch is a distributed search and analytics engine that helps to store and index large volumes of data. Logstash is a data processing pipeline that collects, filters, and transforms logs and other data sources. Kibana is a visualization tool that enables users to explore and visualize data through interactive dashboards and charts.

With ELK Stack, businesses can gain valuable insights from their data, identify patterns, troubleshoot issues, and make data-driven decisions. It is widely used in various industries for log analysis, security monitoring, and business intelligence.

In this tutorial, you will install ELK on Ubuntu 20.04. We will also address a few FAQs related to ELK Stack installation.

Advantages of ELK Stack

  1. Scalability: ELK Stack allows seamless scalability, handling large volumes of data efficiently.
  2. Real-time Analytics: Provides real-time insights and data visualization through interactive dashboards.
  3. Centralized Log Management: Collects and organizes logs from various sources in a centralized location.
  4. Powerful Search Capabilities: Elasticsearch offers robust search capabilities, enabling quick and precise data retrieval.
  5. Open-Source and Cost-effective: ELK Stack is open-source, making it cost-effective and highly customizable for diverse use cases.

Prerequisites to Install ELK Stack

1) An Ubuntu 20.04 server with 4GB RAM and 2 CPUs, set up with a non-root sudo user. This is the minimum amount of CPU and RAM required to run Elasticsearch; the resources your Elasticsearch server ultimately needs depend on the volume of logs you expect.

2) Installation of OpenJDK 11.

3) Installation of Nginx on your server.

4) In addition, the Elastic Stack has access to valuable information about your server, so it is strongly recommended (though optional) to keep the server secure by installing a TLS/SSL certificate.

5) Also, if you want to configure Let’s Encrypt on your server, you need a fully qualified domain name (FQDN). This tutorial uses your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

6) An A record with your_domain pointing to your server’s public IP address.

7) An A record with www.your_domain pointing to your server’s public IP address.
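
Before proceeding, you can quickly verify the Java and DNS prerequisites. This is an optional sketch; it assumes the dig utility is available (it ships with the dnsutils package on Ubuntu), and your_domain is the placeholder used throughout:

# Confirm OpenJDK 11 is installed
java -version

# Both lookups should return your server's public IP address
dig +short your_domain
dig +short www.your_domain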

Step 1 - Installing and Configuring Elasticsearch

1) The Elasticsearch components are not available in Ubuntu’s default package repositories. You will install them with APT after adding Elastic’s package source list. All packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing.

Here, you will first import the Elasticsearch public GPG key. Then, add the Elastic package source list to install Elasticsearch.

2) Next, use cURL, the command-line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we use the arguments -fsSL to silence all progress output and most errors (except for a server failure), and to allow cURL to follow a redirect to a new location. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

3) Then, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

4) Next, update your package lists so APT will read the new Elastic source:

sudo apt update

5) Now, install Elasticsearch with the below command:

sudo apt install elasticsearch

6) Elasticsearch is now installed and ready for configuration. Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml. Here, you will use nano:

sudo nano /etc/elasticsearch/elasticsearch.yml
💡
Elasticsearch’s configuration file is in YAML format. It means that you need to maintain the indentation format. Be sure not to add any extra spaces as you edit the file.

7) The elasticsearch.yml file provides configuration options for the cluster, node, paths, memory, network, discovery, and gateway. Many options are preconfigured in the file, but you can change them as per your needs. For a demonstration of a single-server configuration, you will adjust only the setting for the network host.

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API. To restrict access and increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost like so:

/etc/elasticsearch/elasticsearch.yml
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

8) Specifying localhost makes Elasticsearch listen only on the loopback interface. If you want it to listen on a specific interface instead, specify that interface’s IP in place of localhost. Now, save and close elasticsearch.yml.

9) Next, start the Elasticsearch service with systemctl. Give Elasticsearch a little time to start up; otherwise, you may get errors about not being able to connect.

sudo systemctl start elasticsearch

10) Now, run the below command. It will enable Elasticsearch to start every time your server boots:

sudo systemctl enable elasticsearch

11) Next, test whether your Elasticsearch service is running by sending an HTTP request:

curl -X GET "localhost:9200"

12) You will see a response showing some basic information about your local node, similar to:

Output
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
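
As an optional check, you can confirm that Elasticsearch is bound only to the loopback interface. This sketch assumes the ss utility from iproute2, which is installed by default on Ubuntu 20.04:

# The listening socket for port 9200 should show 127.0.0.1 (and/or [::1]), not 0.0.0.0
sudo ss -tlnp | grep 9200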

Step 2 - Installing and Configuring Kibana Dashboard

1) You should install Kibana only after installing Elasticsearch; installing in this order ensures that the components each product depends on are correctly in place. Install Kibana using apt:

sudo apt install kibana

2) Then, enable and start the Kibana service with the below commands:

sudo systemctl enable kibana
sudo systemctl start kibana

3) Because Kibana is configured to listen only on localhost, you will need to set up a reverse proxy to allow external access. You will use Nginx for this purpose.

4) First, use the openssl command to create an administrative Kibana user that you will use to access the Kibana web interface. As an example, we will name this account kibanaadmin, but to ensure greater security, we recommend choosing a non-standard name that is difficult to guess. The below command creates the administrative Kibana user and password and stores them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

5) Next, enter and confirm a password at the prompt. Remember this login; you will need it to access the Kibana web interface.
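
If you want to confirm the entry was written, you can print the file; it should contain a single line starting with kibanaadmin: followed by an APR1 password hash (the exact hash will differ on your system):

sudo cat /etc/nginx/htpasswd.users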

6) Then, create an Nginx server block file. Here, we will refer to this file as your_domain, although you may find it helpful to give it a more descriptive name. For example, if you have an FQDN and DNS records set up for this server, you can name this file after your FQDN.

7) Using nano or your preferred text editor, create the Nginx server block file:

sudo nano /etc/nginx/sites-available/your_domain

8) Add the below code block into the file, making sure to update your_domain to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. It also configures Nginx to read the htpasswd.users file and require basic authentication.

💡
You may have already made this file and populated it with some content as well. In that case, delete all the existing content in the file before adding the following:
/etc/nginx/sites-available/your_domain
server {
    listen 80;

    server_name your_domain;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

9) After finishing, save and close the file.

10) Then, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name while completing the Nginx prerequisite, you do not need to run this command:

sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain

11) After that, check the configuration for syntax errors and remove Nginx’s default page:

sudo nginx -t
sudo rm /etc/nginx/sites-enabled/default

12) If any errors appear in the output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and reload the Nginx service:

sudo systemctl reload nginx

13) If you followed the initial server setup, you will have a UFW firewall enabled. To allow connections to Nginx, adjust the rules with:

sudo ufw allow 'Nginx Full'

14) Kibana is now accessible via the FQDN or public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the below address and entering your login credentials when prompted:

http://your_domain/status

15) The status page displays information about the server’s resource usage and lists the installed plugins.

[Image: Kibana status page]
💡
In the Prerequisites section, it is recommended to enable SSL/TLS on your server. You can follow the Let’s Encrypt guide to obtain a free SSL certificate for Nginx. After obtaining your SSL/TLS certificate, you can then continue.
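
If you choose to enable SSL/TLS with Let’s Encrypt, the typical flow looks like the below sketch. It assumes you install Certbot with its Nginx plugin and have the DNS records from the prerequisites in place:

# Install Certbot and its Nginx plugin
sudo apt install certbot python3-certbot-nginx

# Obtain and install a certificate for your server block
sudo certbot --nginx -d your_domain -d www.your_domain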

Step 3 - Installing and Configuring Logstash

1) Although it is possible for Beats to send data directly to the Elasticsearch database, it is common to use Logstash to process the data first. This gives you more flexibility to collect data from various sources, transform it into a common format, and export it to another database. Install Logstash with the following command:

sudo apt install logstash

2) After installing Logstash, move on to configuring it. Logstash’s configuration files are in the /etc/logstash/conf.d directory. For more information on configuration syntax, check out the configuration reference. Think of Logstash as a pipeline that takes in data at one end, processes it in one way or another, and sends it to its destination. A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination. A minimal illustrative pipeline is sketched below.

[Image: Logstash pipeline - input, filter, output]
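
To make the pipeline idea concrete, here is a minimal sketch of a single self-contained pipeline file with all three elements. It is for illustration only and is not part of this tutorial’s setup; the steps below create the real input and output in separate files, and the grok filter shown here simply parses syslog-style lines using Logstash’s built-in SYSLOGLINE pattern:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}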

3) Create a configuration file named 02-beats-input.conf, where you will set up your Filebeat input:

sudo nano /etc/logstash/conf.d/02-beats-input.conf

4) Now, insert the following input configuration. It specifies a beats input that will listen on TCP port 5044:

/etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}

5) Save and then close the file.

6) Then, create a configuration file named 30-elasticsearch-output.conf:

sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

7) Next, insert the following output configuration. It configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used (the Beat used here is Filebeat). The conditional sends an event through the named ingest pipeline whenever the event’s metadata specifies one:

/etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

8) Again, save and close the file.

9) Test your Logstash configuration with the below command:

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

10) If there are no syntax errors, your output will display Config Validation Result: OK. Exiting Logstash after a few seconds. If you do not see this, check for any errors in the output and update your configuration to correct them. Note that you may receive warnings from OpenJDK, but they should not cause any problems and can be ignored.

11) If the configuration test is successful, start and enable Logstash to put the configuration changes into effect:

sudo systemctl start logstash
sudo systemctl enable logstash

Logstash is now running correctly and is fully configured.
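
Optionally, you can confirm that Logstash is listening for Beats connections on port 5044 (it can take a minute or so after startup for the port to open):

sudo ss -tlnp | grep 5044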

Step 4 - Installing and Configuring Filebeat

The Elastic Stack uses several lightweight data shippers known as Beats to collect data from various sources and transport it to Logstash or Elasticsearch. Below are the Beats currently available from Elastic:

  • Filebeat - Collects and ships log files.
  • Metricbeat - Collects metrics from your systems and services.
  • Packetbeat - Collects and analyzes network data.
  • Winlogbeat - Collects Windows event logs.
  • Auditbeat - Collects Linux audit framework data and monitors file integrity.
  • Heartbeat - Monitors services for their availability with active probing.

1) Install Filebeat using apt:

sudo apt install filebeat

2) Next, configure Filebeat to connect to Logstash by modifying the example configuration file that ships with Filebeat. Open the configuration file:

sudo nano /etc/filebeat/filebeat.yml
💡
Like Elasticsearch, Filebeat’s configuration file is in YAML format, which means that proper indentation is crucial. Be sure to use the same number of spaces indicated in these instructions.

3) Filebeat supports numerous outputs, but you will usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, you will use Logstash to perform additional processing on the data collected by Filebeat, so Filebeat will not need to send any data directly to Elasticsearch. To disable that output, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

/etc/filebeat/filebeat.yml
...
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
...

4) Next, configure the output.logstash section by uncommenting the lines output.logstash: and hosts: ["localhost:5044"] (remove the #). This configures Filebeat to connect to Logstash on your Elastic Stack server at port 5044, the port for which you specified a Logstash input earlier:

/etc/filebeat/filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

5) Save and close the file. For now, you will use the system module, which collects and parses logs created by the system logging service of common Linux distributions. Enable it:

sudo filebeat modules enable system

6) You will see a list of enabled and disabled modules by running:

sudo filebeat modules list

You will see a list similar to:

Output

Enabled:
system

Disabled:
apache2
auditd
elasticsearch
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
traefik
...

7) By default, Filebeat is configured to use the default paths for the syslog and authorization logs, so you will not need to change anything in the configuration here. You can see the module’s parameters in the /etc/filebeat/modules.d/system.yml configuration file.

8) Next, set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the below command:

sudo filebeat setup --pipelines --modules system
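
To confirm that the pipelines were loaded, you can list them through Elasticsearch’s ingest API; the exact pipeline names vary by Filebeat version, but they will start with filebeat-:

curl -XGET 'http://localhost:9200/_ingest/pipeline/filebeat-*?pretty'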

9) Next, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template is applied automatically when a new index is created.

10) Load the template with the following command:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

You will receive output similar to:

Output

Index setup finished.

11) Filebeat comes with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana. As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load the dashboards while Logstash is enabled, temporarily disable the Logstash output and enable the Elasticsearch output for this command:

sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

You will receive output similar to:

Output

Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.

Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
Loaded machine learning job configurations
Loaded Ingest pipelines

12) Now start and enable Filebeat:

sudo systemctl start filebeat
sudo systemctl enable filebeat

13) If your Elastic Stack is set up correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

14) To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with the following command:

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You will see output similar to the below:

Output

...
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 4040,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.7.1-2020.06.04",
        "_type" : "_doc",
        "_id" : "FiZLgXIB75I8Lxc9ewIH",
        "_score" : 1.0,
        "_source" : {
          "cloud" : {
            "provider" : "digitalocean",
            "instance" : {
              "id" : "194878454"
            },
            "region" : "nyc1"
          },
          "@timestamp" : "2020-06-04T21:45:03.995Z",
          "agent" : {
            "version" : "7.7.1",
            "type" : "filebeat",
            "ephemeral_id" : "cbcefb9a-8d15-4ce4-bad4-962a80371ec0",
            "hostname" : "june-ubuntu-20-04-elasticstack",
            "id" : "fbd5956f-12ab-4227-9782-f8f1a19b7f32"
          },


...

15) If the output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you receive the expected output, continue to the next step.
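
If you need to troubleshoot, a quick way to check whether a filebeat-* index was created at all is to list the indices with Elasticsearch’s cat API, and to confirm that each service in the pipeline is running:

# List all indices; a filebeat-* entry should appear once data is flowing
curl -XGET 'http://localhost:9200/_cat/indices?v'

# Confirm the services are active
sudo systemctl status filebeat logstash elasticsearch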

Step 5 - Exploring the Kibana Dashboards

1) In a web browser, go to the FQDN or public IP address of your Elastic Stack server. If your session is interrupted, you will need to re-enter the credentials you defined earlier. After logging in, you will see the Kibana homepage:

[Image: Kibana homepage]

2) Now, click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the bottom left to see the navigation menu items). On the Discover page, select the predefined filebeat-* index pattern. By default, this shows you all of the log data from the last 15 minutes: a histogram of log events, with the log messages below:

[Image: Kibana Discover page showing Filebeat syslog data]

3) Here, you can search and browse through your logs and even customize your dashboard. At this point, though, there will not be much in there, because you are only gathering syslogs from your Elastic Stack server.

4) Next, use the left-hand panel to navigate to the Dashboard page and search for the Filebeat System dashboards. Once there, you can select the sample dashboards that come with Filebeat’s system module. For example, you can view detailed stats based on your syslog messages:

[Image: Filebeat System syslog dashboard]

5) You can also see which users have used the sudo command and when:

[Image: Sudo commands dashboard]

Kibana has many other features, such as graphing and filtering, so feel free to explore.

FAQs to Install ELK Stack on Ubuntu 20.04

How can I start and stop ELK Stack services?

Use the following commands:

  • Elasticsearch: sudo systemctl start/stop elasticsearch
  • Logstash: sudo systemctl start/stop logstash
  • Kibana: sudo systemctl start/stop kibana
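
To check the state of all three services at once, a quick loop like the below works (a convenience sketch, assuming systemd):

for service in elasticsearch logstash kibana; do
  echo -n "$service: "; sudo systemctl is-active "$service"
done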

Where can I access Kibana's web interface?

Open a web browser and enter http://localhost:5601 or http://<your_server_ip>:5601.

How can I configure Elasticsearch in ELK Stack?

Edit the Elasticsearch configuration file located at /etc/elasticsearch/elasticsearch.yml to modify settings like cluster name, network host, and port.

Can I secure my ELK Stack installation?

Yes, you can secure ELK Stack by configuring SSL/TLS encryption, enabling authentication, and setting up firewall rules to restrict access.

Can I integrate ELK Stack with other tools and services?

Yes, ELK Stack offers integration with various third-party tools, including Beats for data shippers, X-Pack for advanced features, and plugins for extended functionality.

How can I create visualizations and dashboards in Kibana?

Use Kibana's user-friendly web interface to create visualizations like line charts, bar graphs, and maps. Combine them into interactive dashboards.

Conclusion

We hope this detailed guide helped you to install the ELK Stack on Ubuntu 20.04.

If you have any queries, please leave a comment below and we’ll be happy to respond to them for sure.
