Oct 14, 2023

How To Install and Configure Elasticsearch on Rocky Linux 8

Install and configure Elasticsearch on Rocky Linux 8 with our step-by-step tutorial. Elasticsearch is a platform for real-time, distributed search and data analysis.


Introduction

Before we begin talking about how to install and configure Elasticsearch on Rocky Linux 8, let's briefly understand - What is Elasticsearch?

Built on top of Apache Lucene, Elasticsearch is a distributed, highly scalable, open-source search and analytics engine. It is designed to handle large volumes of data and deliver lightning-fast search, making it a platform for real-time, distributed data analysis. Elasticsearch is a popular choice because of its usability, powerful features, and scalability, and it can be used to search many types of data, including structured, unstructured, and geospatial data.

In this tutorial, you will install and configure Elasticsearch on Rocky Linux 8. We will also address a few FAQs on how to install and configure Elasticsearch on Rocky Linux 8.

Advantages of Installing and Configuring Elasticsearch on Rocky Linux 8

1. Powerful Search and Analytics: Elasticsearch offers fast and accurate search capabilities along with advanced analytics features for real-time data analysis.

2. Scalability and Performance: Elasticsearch's distributed architecture allows for horizontal scalability, enabling you to handle large volumes of data efficiently.

3. Full-Text Search: Elasticsearch supports full-text search, making it easy to search and analyze structured and unstructured data.

4. Real-Time Data Processing: Elasticsearch's near real-time indexing and search capabilities enable you to process and analyze data as it arrives, providing up-to-date insights.

5. Ecosystem and Integration: Elasticsearch has a vibrant ecosystem with numerous plugins, libraries, and integration options, making it compatible with various tools and frameworks for data exploration.

Prerequisites

To follow this tutorial, you will need:

  • A Rocky Linux 8 server with 2 GB RAM and 2 CPUs, configured with a non-root sudo user.

Because Elasticsearch allocates itself around 1 GB of RAM by default, keep in mind that you might need to enable swap in a memory-constrained environment. The amount of CPU, RAM, and storage your Elasticsearch server needs depends on the volume of records you expect to produce.
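If you do need swap, the following is a minimal sketch of adding a swap file; the 2 GB size and the /swapfile path are example choices, so adjust them to your environment:

# create and activate a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

To keep the swap file active across reboots, you would also add it to /etc/fstab, for example with a line such as /swapfile swap swap defaults 0 0.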

Step 1 — Installing and Configuring Elasticsearch

You must have a functional text editor installed before you can install Elasticsearch. vi is the default text editor included with Rocky Linux 8; it is powerful, but it can be confusing for users without much experience. To make editing configuration files on your Rocky Linux 8 server easier, you may want to install a more user-friendly editor such as nano:

sudo dnf install nano -y

The next step is to install Elasticsearch. Rocky Linux's default package repositories do not include the Elasticsearch components; instead, they are distributed through repositories maintained by the Elasticsearch project.

To safeguard against package spoofing, all the packages are signed with the Elasticsearch signing key. Your package manager will only trust packages that have been authenticated using the key. In order to install Elasticsearch, you must import the Elasticsearch public GPG key and add the Elastic package source list.

To begin with, import the key from elastic.co using the rpm package tool:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
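If you would like to verify that the key was imported, you can list the GPG public keys that rpm now knows about; one of the entries should mention Elasticsearch:

rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'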

Next, so that your package manager can connect to the Elasticsearch repository, create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory using nano or your preferred text editor:

sudo nano /etc/yum.repos.d/elasticsearch.repo

Add the following lines to the file:

[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

The gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch line instructs your package manager to use the key you imported to verify repository and file information for Elasticsearch packages. The enabled=0 line keeps the repository disabled by default, so you will enable it explicitly when you install Elasticsearch.

Save and close the file. When using nano, you can save your work and exit by pressing Ctrl+X, followed by Y when prompted, and then Enter.
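If you would like to confirm that dnf can see the new repository before installing, you can list it explicitly (because the file sets enabled=0, it will not appear in a plain dnf repolist):

sudo dnf repolist --enablerepo=elasticsearch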

Lastly, use the dnf package manager to install Elasticsearch:

sudo dnf install --enablerepo=elasticsearch elasticsearch

When asked to confirm installation, press y.

The installation output should include security autoconfiguration information and, most importantly, the automatically generated password for the Elasticsearch admin user.

Output
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : CH77_qG8ji8QCxwUCr3w
…

Take note of this password because you'll need it to create additional Elasticsearch users later on in this tutorial. Elasticsearch has been set up and is prepared for configuration.
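If you ever lose this password, it can be regenerated with the reset tool that ships with Elasticsearch 8; the path below is the default location for RPM-based installs:

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic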

Step 2 — Configuring Elasticsearch

The majority of Elasticsearch's configuration options are kept in its main configuration file, elasticsearch.yml, which you will now update. This file can be found in the /etc/elasticsearch directory.

Open the configuration file for Elasticsearch in nano or any text editor of your preference:

sudo nano /etc/elasticsearch/elasticsearch.yml
💡
Note: Because the configuration file for Elasticsearch is in YAML format, you must maintain the syntax for indentation. Make sure you don't include any unnecessary spaces as you edit this file.

Configuration options for your cluster, node, paths, memory, network, discovery, and gateway are provided in the elasticsearch.yml file. The majority of these options are already set up in the file, but you can modify them to suit your needs. Only the network host settings will be changed for the duration of this single-server setup.

By default, Elasticsearch listens on port 9200 for traffic from everywhere. Because Elasticsearch 8.x now requires authentication by default, this is less of a problem than it was in earlier versions, but you will likely still want to restrict outside access to your Elasticsearch instance so that unauthorized parties cannot read your data or shut down your Elasticsearch cluster through its REST API.

Find the line that specifies network.host to limit access, uncomment it by simply removing the # at the beginning of the line, and change its value to localhost, so it appears as follows:

. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

Specifying localhost binds Elasticsearch to the loopback interface, so it only accepts connections from the server itself. If you want it to listen on a specific network interface instead, you can supply that interface's IP address in place of localhost. Save and exit elasticsearch.yml. When using nano, you can save your work and exit by pressing Ctrl+X, followed by Y when prompted, and then Enter.

These are the minimum settings you can use to get started with Elasticsearch. Elasticsearch can now be launched for the first time.

Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to launch before querying it; otherwise, you may receive errors indicating that you are unable to connect.

sudo systemctl start elasticsearch

Run the following command to make Elasticsearch launch each time your server boots:

sudo systemctl enable elasticsearch
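If you want to confirm that the service started cleanly, you can check its status and recent log output with the standard systemd tools:

sudo systemctl status elasticsearch
sudo journalctl -u elasticsearch --since "10 minutes ago"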

Now that Elasticsearch is enabled upon startup, let's move on to the next phase and talk about security.

Step 3 — Securing Elasticsearch

Anyone with access to the HTTP API can manage Elasticsearch. This is not inherently a security problem, because you have already configured Elasticsearch to listen only on localhost and, as of Elasticsearch 8+, it requires authentication with the generated admin password.

If you need to provide remote access to the HTTP API, you can restrict network exposure with firewalld. Because Elasticsearch listens on port 9200, you can create a firewall rule that opens that port only to specific trusted hosts, as sketched below.
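As a sketch, assuming firewalld is running and 203.0.113.5 is a placeholder for a trusted client's address, a rich rule like the following would open port 9200 to that host only (remember that Elasticsearch must also be bound to a non-loopback address for remote access to work):

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.5/32" port port="9200" protocol="tcp" accept'
sudo firewall-cmd --reload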

If you want to invest in more security, Elasticsearch also offers additional security features, formerly sold as the commercial Shield plugin and now part of X-Pack.

Step 4 — Testing Elasticsearch

By this point, Elasticsearch should be running on port 9200. You can test it by using curl to send a simple HTTP GET request to localhost:9200. Because the Elasticsearch API requires HTTPS and authentication by default as of version 8.x, include the generated certificate in the request with the --cacert argument. Lastly, supply the default admin username, elastic, using the -u elastic argument.

sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200

You will be prompted for the admin password that was generated during installation. Upon authentication, you should receive the following response:

Output
{
  "name" : "elasticrocky",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_hb4dLuuR-ipiloXHT_AMw",
  "version" : {
    "number" : "8.5.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "4ed5ee9afac63de92ec98f404ccbed7d3ba9584e",
    "build_date" : "2022-12-05T18:22:22.226119656Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

If you get a response that resembles the one shown above, Elasticsearch is operating correctly. If not, check to see if you correctly followed the installation instructions and gave Elasticsearch enough time to start up.

Try querying the _nodes endpoint to do a more thorough check on Elasticsearch, and add ?pretty to the end of the query to receive text formatting that can be read by humans:

sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/_nodes?pretty
Output
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "7TgeSgV2Tma0quqd6Mw6hQ" : {
…

This lets you review the current configuration of the node, cluster, application paths, modules, and more.

Step 5 — Using Elasticsearch

Let's first add some data to Elasticsearch before using it. Elasticsearch exposes a RESTful API, so it can be accessed with the usual CRUD commands: create, read, update, and delete. You'll use curl once more to send data to the API, but this time you'll use -X PUT to submit a PUT request rather than a GET request, and you'll use -d to add some JSON-formatted data to the command line.

You can start by adding the following entry:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X PUT "https://localhost:9200/test/_doc/1?pretty" -k -H 'Content-Type: application/json' -d '{"counter" : 1, "tags" : ["red"]}'

You should get the following response:

Output
{
  "_index" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}

You have made an HTTP PUT request to the Elasticsearch server using cURL. The request has the URI /test/_doc/1 with the following parameters:

  • The index of the data in Elasticsearch is test.
  • The type is _doc.
  • The ID of our entry under the above index and type is 1.

You can retrieve this first entry with an HTTP GET request:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X GET "https://localhost:9200/test/_doc/1?pretty" -k -H 'Content-Type: application/json'

This should be the output:

Output
{
  "_index" : "test",
  "_id" : "1",
  "_version" : 1,
  "_seq_no" : 0,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "counter" : 1,
    "tags" : [
      "red"
    ]
  }
}
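Beyond retrieving documents by ID, you can also search an index. As a sketch, a simple query-string search for documents tagged red would look like the following (indexing is near real-time, so a freshly added document can take a moment to become searchable):

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X GET "https://localhost:9200/test/_search?q=tags:red&pretty" -k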

Use an HTTP PUT request to change an existing entry:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X PUT "https://localhost:9200/test/_doc/1?pretty" -k -H 'Content-Type: application/json' -d '{"counter" : 1, "tags" : ["blue"]}'

Elasticsearch should display the following message after a successful modification:

Output
{
  "_index" : "test",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}

In the example above, we changed the first entry's tags value from "red" to "blue". As a result, the version number was automatically incremented to 2.
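The API also covers the delete part of CRUD. As a sketch, removing the test document would look like this:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X DELETE "https://localhost:9200/test/_doc/1?pretty" -k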

You may have noticed the additional pretty parameter in the requests above. It enables human-readable formatting, so each data field is printed on its own row. Without pretty, Elasticsearch returns its output without line breaks or indentation, which is harder to read on the command line but perfectly fine for API communication.

Now, you've added data to Elasticsearch and performed queries on it. Please refer to the API documentation to learn more about the other operations.

FAQs to Install and Configure Elasticsearch on Rocky Linux 8

Can Elasticsearch be installed on Rocky Linux 8 using the package manager? 

Yes, you can install Elasticsearch on Rocky Linux 8 using the package manager by adding the Elasticsearch repository and then installing the appropriate package.

What is the default configuration file for Elasticsearch on Rocky Linux 8? 

The default configuration file for Elasticsearch on Rocky Linux 8 is located at /etc/elasticsearch/elasticsearch.yml.

How can I configure Elasticsearch to listen on a specific IP address or hostname on Rocky Linux 8? 

In the configuration file, modify the network.host property and set it to the desired IP address or hostname.
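For example, binding to a specific address (192.0.2.10 is a placeholder) would look like this in elasticsearch.yml:

network.host: 192.0.2.10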

Can I enable automatic index creation in Elasticsearch on Rocky Linux 8? 

Yes, you can enable automatic index creation by setting the action.auto_create_index property in the configuration file.
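As a sketch, the setting accepts a boolean or a comma-separated list of allowed index patterns in elasticsearch.yml; the patterns below are purely illustrative:

action.auto_create_index: "logs-*,metrics-*"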

Can Elasticsearch be integrated with other tools and frameworks on Rocky Linux 8? 

Yes, Elasticsearch can be integrated with tools like Logstash for data ingestion or Kibana for data visualization. This combination is known as the Elastic Stack.

Can I upgrade Elasticsearch to a newer version on Rocky Linux 8? 

Yes, you can upgrade Elasticsearch on Rocky Linux 8. The upgrade process involves backing up the data, installing the new version, and then restoring the data.

Is it possible to secure Elasticsearch transport connections on Rocky Linux 8? 

Yes, you can secure transport connections by enabling TLS/SSL encryption in the configuration file and configuring certificates.

How can I uninstall Elasticsearch from Rocky Linux 8 if needed? 

To uninstall Elasticsearch, use the package manager and remove the Elasticsearch package. Additionally, you may need to delete the Elasticsearch data directory.
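A minimal sketch of those steps, assuming the default package name and data directory (only remove the data directory if you no longer need the data):

sudo dnf remove elasticsearch
sudo rm -rf /var/lib/elasticsearch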

Conclusion

By following these steps, you can successfully install and configure Elasticsearch on your Rocky Linux 8 system. This allows you to leverage the power of Elasticsearch for efficient searching, indexing, and analysis of your data, enabling you to build robust and scalable search applications.

For further information about Elasticsearch's functionality, please visit the official Elasticsearch documentation.

If you have any suggestions or queries, kindly leave them in the comments section.
