Splunk Glossary

Introduction

The world of monitoring and alerting systems can seem complex and demanding without a solid grasp of crucial terms and ideas.

Within the Splunk Glossary, we clarify all vital terms related to monitoring, alerting, and performance optimization. Enhance your understanding of Splunk and boost your knowledge of monitoring systems effortlessly.

Splunk Terms

A

Action: In Splunk, an action is a command that an analyst can execute manually or incorporate into an automated workflow. For instance, an action could be used to add a file, comment, or attachment to an incident or case being investigated.

Ad Hoc Search: An unscheduled search. Ad hoc searches are used to explore your data or to construct a search incrementally. Typically initiated from the Search bar, ad hoc searches can be saved as dashboard panels and scheduled reports for future reference.

Ad Hoc Search Head: A search head specifically set up for executing ad hoc searches exclusively, without involvement in scheduled searches. An ad hoc search head may function as a standalone search head or as a member of a search head cluster.

Ad Hoc Risk Entry: A one-time, manual adjustment to an object's risk score, made outside of the normal risk calculation process. Ad-hoc risk entries allow you to increase, decrease, or neutralize an object's risk level.

Adaptive Response actions: A custom alert action that adheres to the common action model is known as an Adaptive Response action. In Splunk Enterprise Security, these actions can be initiated from correlation searches or manually while reviewing a notable event in Incident Review. To create a custom Adaptive Response action, you can utilize the Splunk Add-on Builder or leverage the cim_actions.py library found in the Common Information Model Add-on.

Add-on: An add-on in Splunk refers to a software component that enhances the functionality of the Splunk platform. It can be a domain-specific add-on or a third-party add-on.

Agent: In Splunk Observability Cloud, an agent is a deployment method where an instance of the software runs with the application or on the same host as the application.

Alert: An alert in Splunk is a notification triggered by predefined conditions or thresholds. Alerts can be used to proactively monitor data and detect anomalies.

Alert Action: An action triggered by an alert, such as sending an email notification or invoking a webhook, can be referred to as an alert action. These actions may contain search metadata or specific result information in their notifications. Alert actions are knowledge objects with permissions that can be customized independently from the permissions of the alerts themselves.

Alias: A field alias is an alternative name assigned to a field, enabling you to search for events containing that field using the alias. A field can have multiple aliases, with each alias corresponding to only one field. The alias is added to the event alongside the original field, without replacing or removing it. Field aliasing is useful for normalizing field names.
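
For example, a field alias is typically defined in props.conf with the FIELDALIAS setting. A minimal sketch, using a hypothetical alias class name, that lets events be searched with src_ip as well as clientip:

    [access_combined]
    FIELDALIAS-normalize_src = clientip AS src_ip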

Allow List: A set-based filtering rule that encompasses one or more members is known as an allow list rule. For instance, you can utilize allow list rules to instruct a forwarder on which files to ingest while monitoring directories, or with the deployment server to explicitly choose a deployment client. By combining allow list rules with deny list rules, which identify members to exclude, you can achieve precise filtering. It's important to note that deny list rules take precedence over allow list rules.
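
As an illustration, the following inputs.conf sketch monitors a directory while ingesting only files ending in .log and skipping any path containing "debug". The monitored path and patterns are hypothetical; note that in many releases the underlying setting names are still whitelist and blacklist:

    [monitor:///var/log]
    whitelist = \.log$
    blacklist = debug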

Analytic Store: Another name for the high performance analytics store: the collection of .tsidx file summaries used to speed up one or more data models through persistent data model acceleration. See High Performance Analytics Store.

Analytics: Analytics in Splunk refer to mathematical functions that can be applied to a collection of data points. They are used to analyze and visualize data.

Annotations: Contextual information that can enhance your risk notables in Splunk Enterprise Security, including specific cybersecurity frameworks like MITRE ATT&CK, CIS 20, or NIST Controls.

App: A tailored solution operating within the Splunk platform that bundles particular files and configurations to cater to specific requirements. An app may encompass multiple views and incorporate various knowledge objects like reports, lookups, scripted inputs, and modular inputs. Occasionally, an app relies on one or more add-ons for specialized functionality. An example of an app is the Splunk Enterprise Search app.

App Key Value Store: The application key-value store (KV store) offers a method to store and retrieve data within your Splunk applications as sets of key-value pairs. This functionality allows you to oversee and preserve the status of your applications and store supplementary information.
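
A minimal sketch of defining and querying a KV store collection, using hypothetical collection and lookup names. The collection is declared in collections.conf, exposed as a lookup in transforms.conf, and then read with | inputlookup session_state_lookup or written with | outputlookup session_state_lookup:

    # collections.conf
    [session_state]
    field.session_id = string
    field.status = string

    # transforms.conf
    [session_state_lookup]
    external_type = kvstore
    collection = session_state
    fields_list = _key, session_id, status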

App Manifest: The app manifest is a .manifest file created by the Packaging Toolkit to outline a Splunk app, detailing dependencies and input groups.

Archiving: The process of accumulating and preserving a collection of historical data is known as archiving. In Splunk Enterprise, you can establish an archiving policy tailored to your organization's requirements. This allows you to specify that indexed data be archived based on the size or age of an index.

Artifact: In Splunk SOAR (Cloud) or Splunk Mission Control, an artifact refers to any item that signifies risk, encompassing risk objects, threat objects, observables, assets, identities, and indicators. These artifacts can be grouped into categories known as entities, which contain similar items.

Asset: A system or device within a customer's organization that is connected to a network. Splunk Enterprise Security utilizes machine data, such as IP addresses, domain names, NetBIOS names, and machine addresses derived from assets, to provide context to systems and link them with events for the detection of potential security risks.

Attribute: A field linked to the dataset represented by a data model dataset. In the Pivot Editor, users select attributes to define visualizations such as tables and charts. Each child object in a data model inherits attributes from its parent object and can also contain additional attributes, including extracted fields, calculated fields, and fields derived from lookups.

Audit Event: An event triggered by a monitored action within Splunk Enterprise. This audit event is recorded in the audit index and can include activities such as search operations and modifications to role-based access controls.

Audit Index: The repository where audit events are housed.

Automatic Key Value Field Extraction: A form of field extraction that utilizes the KV_MODE attribute in props.conf to automatically extract fields for events linked to a particular host, source, or source type. Automatic key-value field extractions occur as the third step in the sequence of search time operations, following inline field extractions and transform field extractions but preceding field aliases. By default, automatic key-value field extractions are enabled and can be customized in props.conf. This method is one of three types of search-time field extractions.
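
For instance, automatic key-value extraction is controlled per source type with the KV_MODE setting in props.conf. A sketch with a hypothetical source type; KV_MODE defaults to auto, while values such as json or xml instruct the platform to extract fields from structured data instead:

    [my_json_sourcetype]
    KV_MODE = json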

B

Base Search: A foundational search that serves as the basis for multiple similar searches.

  • In dashboards, creating a base search for a dashboard that runs multiple similar searches can optimize search resources. Panels within the dashboard use a post-process search to further refine the results of the base search (see the Simple XML sketch after this list).
  • In an SPL2 search module, you can expand or diverge from a base search by incorporating filters or commands to summarize or transform search outcomes.
  • In the realm of IT Service Intelligence (ITSI), leveraging KPI base searches allows for the sharing of a search definition across various KPIs utilizing the same data source. For instance, when multiple KPIs rely on identical sets of source events but measure different fields, establishing base searches can amalgamate these KPIs, decrease search burden, and enhance search efficiency.
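
A minimal Simple XML sketch of the dashboard case, with hypothetical index and field names: the base search runs once, and the panel's post-process search (which begins with a pipe) refines its results:

    <dashboard>
      <label>Web errors</label>
      <search id="base">
        <query>index=web sourcetype=access_combined status>=500</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
      <row>
        <panel>
          <chart>
            <search base="base">
              <query>| timechart count by host</query>
            </search>
          </chart>
        </panel>
      </row>
    </dashboard>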

Bloom Filter: A data structure employed to determine if an element belongs to a set. Bloom filters are utilized by the Splunk platform to expedite event retrieval from the index, particularly beneficial when searching for infrequent terms.

Bucket: A segment of a Splunk Enterprise index stored within a file system directory. Splunk Enterprise indexes are usually composed of multiple buckets categorized by age.

💡
The bucket search command mentioned here is distinct from the index buckets discussed above. It is designed to categorize continuous numerical values into discrete sets or bins. The bucket command is essentially a synonym for the bin command.
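
For example, the following search against the always-present _internal index groups events into five-minute bins; bucket could be substituted for bin with identical results:

    index=_internal | bin _time span=5m | stats count by _time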

Bucket Fixing: In an indexer cluster, when a peer node becomes unavailable, the manager node coordinates a series of corrective actions with the remaining peers, known as bucket fixing or 'bucket fixup'. The primary objectives are to replicate buckets, index non-searchable bucket copies, and restore the cluster to a valid and complete state. Bucket fixing may also take place during data rebalancing or in a few other specific situations.

Build Event Type Utility: A utility that generates event types in real-time by analyzing a chosen event. To utilize this tool, conduct a search, identify a suitable event in the search results, and choose "Build event type" from the event menu. The Build Event Types feature enables you to experiment with various field/value combinations for the event type search, evaluate potentially valuable event types, and save those that prove effective.

C

Cache Manager: The component of an indexer responsible for overseeing the SmartStore cache. The primary objective of the cache manager is to enhance the efficiency of local storage utilization. It manages the movement of bucket copies between local and remote storage, as well as the removal of bucket copies from local storage.

Calculated Field: A field that encapsulates the result of an eval expression. Splunk search defines and appends calculated fields to events during the search process, handling them after executing search-time field extractions. This allows the eval expression at the core of the calculated field definition to leverage values from one or more previously extracted fields. Splunk search evaluates each calculated field in isolation, independent of other calculated fields. It is not possible to chain them together by utilizing one calculated field within the eval expression of another calculated field.
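
Calculated fields are defined in props.conf with the EVAL- prefix. A sketch with a hypothetical source type, assuming an already-extracted duration field measured in seconds:

    [my_sourcetype]
    EVAL-duration_minutes = duration / 60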

Capability: An access rule that can be added to a role. Capabilities determine the actions that users holding a given role can perform in the Splunk platform, such as running real-time searches or editing configuration settings. A role can hold multiple capabilities and can inherit capabilities from other roles.

Character Set Encoding: A technique for presenting and handling language characters on computer systems. In Splunk Enterprise, all IT data sources are set to use UTF-8 encoding by default. However, an alternative character set can be specified through the CHARSET key in the props.conf configuration file.

Collection: A repository for a collection of data in an App Key Value Store, akin to a database table where each entry is identified by a unique key. Collections are specific to a particular app context.

Command Line Interface: The Splunk Enterprise command-line interface (CLI) is a text-based interface where you can execute system commands, modify configuration files, and run searches. Additionally, Splunk Enterprise offers command-line tools designed to assist in troubleshooting deployment and configuration issues.

Command Line Tool: A tool within Splunk that is executable from the command-line interface (CLI) to diagnose issues in a Splunk Enterprise setup. Examples of command-line tools include btool, locktool, and signtool.

Common Information Model (CIM): A collection of predefined data models that can be utilized to analyze your data during searches. Within the CIM, each data model comprises a list of field names and tags that establish the fundamental elements of a specific domain. By employing the CIM, you can standardize your data to align with a universal framework, ensuring consistency by utilizing identical field names and event tags for similar events sourced from various providers or systems.
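
For example, once data is normalized to the CIM, the same search works regardless of vendor. This sketch counts failed logins by source using the CIM Authentication data model (assuming the model is installed and populated):

    | tstats count from datamodel=Authentication
        where Authentication.action="failure"
        by Authentication.src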

Component: One of the various Splunk Enterprise instances, each responsible for specific roles within the Splunk ecosystem, such as data ingestion or indexing.
Splunk Enterprise components encompass multiple types, categorized into two main groups:

  • Processing components: These components handle the data.
  • Management components: These components facilitate the operations of the processing components.

In a distributed setup, it is common to assign various segments of the data pipeline to distinct processing components. The following are the types of processing components available:

  • Indexer
  • Forwarder
  • Search head

Management components include:

  • license manager
  • monitoring console
  • deployment server
  • indexer cluster master node
  • search head cluster deployer

Conditional Routing: A data distribution situation in which a forwarder selectively transmits event data to receivers depending on patterns identified within the event data.

Configuration Bundle: In an indexer cluster, the master node distributes a set of shared configuration files and apps to the peer nodes. In a search head cluster, the deployer distributes a set of common configuration files and apps to the cluster members.

Configuration File: A document containing settings and configuration details for Splunk Enterprise and Splunk Cloud Platform. Known as a .conf file, configuration files are located in the following directories:

  • Default files: $SPLUNK_HOME/etc/system/default
  • Local files: $SPLUNK_HOME/etc/system/local
  • App files: $SPLUNK_HOME/etc/apps/

For configuring Splunk Enterprise, adjustments to configuration settings can be made by editing stanzas within copies of the default configuration files stored in a local directory. In contrast, configuring Splunk Cloud Platform requires the use of Splunk Web, as users of Splunk Cloud Platform do not have file system access and cannot manually edit .conf files.
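
As a sketch of the precedence model in Splunk Enterprise: to change a default setting, copy the relevant stanza into a local file rather than editing the default file, because local settings override default settings and default files are overwritten on upgrade. The stanza and value below are hypothetical examples:

    # $SPLUNK_HOME/etc/system/local/props.conf
    [my_sourcetype]
    TRUNCATE = 20000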

Connector: A component that enables connectivity from Splunk Mission Control to an external system, such as Okta or Maxmind. Analogous to an app in Splunk SOAR (Cloud), a connector defines the actions accessible to a user or a playbook for that particular external system.

Constraint: A core element within a data model dataset, constraints serve to filter out extraneous events and refine the dataset's representation. The specifics of constraint definitions vary based on the object type. They can range from straightforward searches (root event datasets, all child datasets) to intricate searches (root search datasets) or transaction definitions (root transaction datasets). Every child object inherits the constraints of its parent object while introducing a new constraint to ensure it encapsulates a subset of the data encompassed by its parent dataset.

Contributing Event: An event that triggers the generation of an incident in Splunk Mission Control or a notable event in Splunk Enterprise Security.

Correlation Search: A correlation search is a scheduled search that enables the detection of suspicious events and patterns within your data. You can configure a correlation search to generate a notable event when its results satisfy predefined conditions. For a notable event to be created, the correlation search must return at least one event. You can examine notable events using the Incident Review dashboard in Splunk Enterprise Security and the Splunk App for PCI Compliance, or the Notable Events Review dashboard in Splunk IT Service Intelligence.

Counter Metric: A form of metric that consistently increases with each change, except when reset to zero, much like an automobile odometer, which tracks the distance a vehicle has traveled and always progresses unless reset. Counter metrics come in two varieties: periodic and accumulating.

  • Periodic counter metrics reset to zero whenever the client transmits a measurement to the server. Consequently, each data point for a periodic counter metric is distinct from others associated with the same metric.
  • Accumulating counter metrics reset to zero only when the service undergoes a reset. Each new value is added to the previous one, allowing for comparison between measurements to determine the rate of value accumulation.

D

Dashboard: A dashboard in Splunk is a visual representation of data using charts, graphs, and tables. It provides a quick overview of key metrics and trends.

Data Model: A data model in Splunk is a hierarchical structure that defines relationships between fields. It helps users navigate and analyze data more effectively.

Data Point: A data point in Splunk refers to a specific piece of data that is being monitored or analyzed. It can be a single value or a collection of values.

Default Field: A field indexed by Splunk Enterprise upon encountering your event data during searches. Among the key default fields are host, source, and source type, detailing the event's origin. Additional default fields encompass date/time fields, enhancing the searchability of event timestamps. Splunk Enterprise also incorporates default fields categorized as internal fields.

Default Group: A target group designated to receive event data from a forwarder when the data does not match any other predefined target groups. For instance, you can configure the forwarder to direct all events containing the word "error" to one target group, all events with a sourcetype of syslog to a second target group, and all remaining events to a default group.
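
A minimal outputs.conf sketch, with hypothetical group and host names: events that match no other routing rule are sent to the group named by defaultGroup:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997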

Deployer: A Splunk Enterprise component responsible for distributing baseline configurations and apps to search head cluster members.

💡
Note: The deployer is not part of the search head cluster itself.

Deployment: A collection of interconnected Splunk Enterprise instances collaborating to operate efficiently. In a standard deployment setup, there are multiple forwarders and one or more indexers, with the forwarders transmitting data to the indexers for indexing and searching. Distributed search represents an alternative Splunk Enterprise deployment model, and a unified deployment scenario could incorporate both forwarding and distributed search functionalities. Splunk Enterprise streamlines deployment management through its deployment server technology, where a designated Splunk Enterprise instance (the deployment server) disseminates content and configuration files to numerous deployed instances (the deployment clients).

Deployment Method: A deployment method in Splunk refers to the way in which the software is installed and configured on a system. Examples include the agent and gateway methods.

Detector: A detector in Splunk Observability Cloud is a rule that defines the conditions under which an alert is triggered. It can be used to monitor specific metrics or events.

Drilldown: An interactive feature that fetches and presents supplementary details upon clicking specific visualization or search result components. When you select a table row, cell, or other element, a new search is initiated on the data within that selected element. Visualization drilldown can be enabled or disabled as needed. The default behavior and configuration options may vary depending on the type of visualization.

E

Embedded Report: A scheduled report integrated into an external webpage or HTML-based dashboard. Any scheduled report from the Reports listing page can be embedded. Embedded reports consistently show the outcomes of their most recent scheduled execution, maintaining the visualization style and formatting of the original report. These reports are view-only for visitors accessing the external webpage or dashboard where they are embedded.

Enclaves: In Threat Intelligence Management, cloud-based data repositories with stringent access controls are used to store structured or unstructured intelligence data.

Entity Zone: An abstract collection of entities, including assets and identities, united by shared attributes. Entity zones enable the organization and correlation of entities across different zones. For instance, if your company undergoes a merger, entity zones can be leveraged to segregate the IP address ranges of the merging companies and monitor any suspicious activity confined within a specific zone.

Event: An individual data unit within Splunk software, akin to a log file record or another form of data input. During indexing, data is segmented into distinct events, each assigned a timestamp, host, source, and source type. While a single event commonly aligns with a single line in your inputs, certain inputs like XML logs may contain multiline events, and some inputs may have multiple events within a single line. Upon executing a successful search, the results typically consist of events.

F

Federated Index: A custom index established on a federated search head to facilitate federated searches. Each federated index corresponds to a particular remote dataset on a federated provider. In Splunk's Federated Search, these indexes can be linked to indexes, saved searches, and data models. For example, in Federated Search for Amazon S3, federated indexes are associated with AWS Glue Data Catalog tables. It's important to note that federated indexes cannot be designated as destinations for data inputs.

Field: A field in Splunk is a key-value pair extracted from the raw data during indexing. Fields are used for searching, reporting, and visualization.

Filtering: The process of restricting a group of events or specific fields within events by applying defined criteria.

  • When it comes to forwarding, you have the ability to filter and direct events to designated indexers or queues.
  • In the realm of searching, you can formulate searches that refine search outcomes by excluding events or fields.

Various configuration files, like inputs.conf and serverclass.conf, offer attributes that allow you to establish inclusion and exclusion filtering guidelines.
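
As an index-time illustration, the following props.conf and transforms.conf sketch discards events containing DEBUG by routing them to the nullQueue; the source path, transform name, and pattern are hypothetical:

    # props.conf
    [source::/var/log/app.log]
    TRANSFORMS-null = setnull

    # transforms.conf
    [setnull]
    REGEX = \bDEBUG\b
    DEST_KEY = queue
    FORMAT = nullQueue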

Fishbucket: A directory within Splunk software that monitors the indexing progress within a file, facilitating the detection of new data additions and the resumption of indexing. The fishbucket subdirectory stores seek pointers and CRCs for indexed files.

Forwarder: A forwarder is a Splunk component that collects and forwards data to the indexing tier for storage and analysis.

Forwarding: The process of transmitting parsed or unparsed data from one Splunk Enterprise instance to another. The host initiating the data transfer is referred to as the forwarder, while the host receiving the data is known as the receiver.

G

Gateway: A gateway in Splunk Observability Cloud is a deployment method where the software runs as a standalone service, separate from the applications it monitors. It is one way to deploy and configure the Splunk Distribution of OpenTelemetry Collector.

Generation: At any given point in time, the collection of primary bucket copies within an indexer cluster represents the current generation. When conducting searches, a search head exclusively searches within the current generation of the cluster. Each generation is uniquely identified by a generation ID.

Generation ID: The generation ID serves as a unique identifier for a specific generation in an indexer cluster. For each search, the search head utilizes the generation ID obtained from the master node to determine which peer nodes to include in the search.

H

Heavy Forwarder: A heavy forwarder, a variant of a forwarder, serves as a Splunk Enterprise instance responsible for transmitting data to either another Splunk Enterprise instance or an external system. Despite being more compact than a Splunk Enterprise indexer, a heavy forwarder maintains most indexer functionalities, with the notable exception of lacking the ability to conduct distributed searches. To optimize its size, certain services like Splunk Web can be deactivated. Distinguished from other forwarder variants, a heavy forwarder processes data before transmission, enabling it to route data based on specific criteria like event source or type. Additionally, it can locally index data while simultaneously forwarding it to another indexer.

High Performance Analytics Store: The aggregation of .tsidx file summaries employed to enhance the performance of one or multiple data models via continuous data model acceleration.

Historical Search: A search query that targets a specific time period, such as the last hour, yesterday, or a specific time frame in the past. While historical searches typically focus on past data, they can also be configured to examine events with future timestamps if the index contains them. This concept distinguishes historical searches from real-time searches, which continuously scan a time window until stopped and actively monitor incoming events as they are being indexed.

Host: An inherent field storing the host name or IP address of the network device responsible for generating an event. Every event includes a host field, which the indexer creates during indexing. The host field is utilized in searches to refine results to events originating from a particular device.

Hybrid Search: A search that merges data from both on-premises Splunk Enterprise indexers and Splunk Cloud Platform. Hybrid searches must be executed from an on-premises search head.

I

Identity: A user within the customer's network. Splunk Enterprise Security leverages machine data produced by identities to provide context for users and link them to events, enabling the identification of potential security risks.

Incident: An event triggered by a correlation search within Splunk Mission Control, serving as a security alert. Similar to a notable event in Splunk Enterprise Security (Cloud), an incident is a case for investigation. For instance, this could involve an email originating from a suspected phishing source.

Index: An index in Splunk is a repository where data is stored. It is used to speed up search performance by organizing data into smaller, more manageable units.

Index Parallelization: A customizable functionality enabling an indexer to utilize multiple sets of pipelines.

indexQueue: A data pipeline queue responsible for storing parsed events awaiting indexing. Initially, incoming data enters the parsingQueue before proceeding to the parsing pipeline for event processing. Subsequently, it transitions to the indexQueue and advances through the indexing pipeline, where Splunk software stores the events on disk. Typically, both parsing and indexing operations occur on the indexer. Nevertheless, there is an option to segregate the parsing phase to be performed on a heavy forwarder instead.

Indicator: A data element offering supplementary details regarding atypical, suspicious, or malicious cyber activities, including the time of detection and the associated risk level.

Input: The initial phase of the data pipeline involves Splunk Enterprise fetching the raw data stream from its source, dividing it into 64K blocks, and tagging each block with metadata keys. Once the data is acquired and segmented into blocks, it progresses to the subsequent stage of the pipeline, parsing. Data input can take place on either an indexer or a forwarder.

J

Job: A single occurrence of a search, pivot, or report, either in progress or completed, along with its associated output, referred to as a search artifact. The Search Job Inspector enables you to analyze any manually executed search. The Jobs page provides a list of recently run or saved jobs for further examination.

K

Key Indicator: Crucial predefined visual security metrics established for security domains in Splunk Enterprise Security. Key indicators can be viewed on dashboards, with each indicator displaying a value, trend amount, trend direction, and threshold used to determine the significance or priority of the metric.

Key Indicator Searches: In Splunk Enterprise Security, searches that generate a primary indicator, allowing you to incorporate it as a security metric on a dashboard. These key indicator searches are executed against the data models specified within Splunk Enterprise Security or the Common Information Model app. Certain key indicator searches focus on the tally of notable events.

Knowledge Bundle: The collection of knowledge objects that a search head shares with its search peers to enable them to participate in distributed searches.

Knowledge Object: A user-defined entity that enriches existing data in Splunk. Knowledge objects include saved searches, reports, dashboards, and alerts, and they help users analyze and visualize data efficiently.

L

Load Balancing: The process in which a forwarder distributes its data stream among multiple designated receiving indexers in a predefined group. For instance, when a load-balanced group of three receiving indexers is configured, the forwarder alternates between them at set intervals to ensure each indexer receives a share of the data.

Logging Verbosity: An adjustable level that defines the type and/or amount of information an application records in a log file. Adjust the logging verbosity of an application to determine the extent and nature of details written to its log file. Splunk Enterprise offers logging verbosity levels including DEBUG, INFO, NOTICE, WARN, ERROR, CRIT, ALERT, FATAL, and EMERG, listed from most to least detailed.

Lookup: A knowledge object that enhances data by linking a specific value within an event to a field in an external data source, and appending the matched results to the original event. For example, a lookup can be used to match an HTTP status code and add a new field with a detailed description of the status. The data sources for lookup content include search results, .csv files, geospatial .kmz files, KVStore collections, and external database connections facilitated by scripts.
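
For instance, assuming a CSV lookup named http_status with fields status and status_description has been defined, the lookup command enriches each matching event with the description field:

    sourcetype=access_combined
    | lookup http_status status OUTPUT status_description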

M

Major Breaker: A character that serves as a delimiter to separate words, phrases, or terms in event data into larger units. Examples of major breakers include spaces, commas, semicolons, question marks, parentheses, exclamation points, and quotation marks, which help to demarcate and organize the data.

Manager Node: The indexer cluster node responsible for overseeing the operations of an indexer cluster.

Metric: A metric in Splunk is a periodic measurement that is represented as a numerical value. It is the primary form of data sent into Splunk Infrastructure Monitoring.

Metric Time Series: A metric time series (MTS) in Splunk is defined by the unique combination of a metric and a set of dimensions. It is used to analyze and visualize data over time.

Minor Breaker: A character used in conjunction with major breakers to subdivide large tokens of event data into smaller units. Examples of minor breakers include periods, forward slashes, colons, dollar signs, pound signs, underscores, and percent signs, which provide additional granularity in tokenizing the data.

Module (ITSI): Modules within the Splunk IT Service Intelligence (ITSI) App are specialized Splunk apps composed of metrics, entities, and service configurations. These modules are crafted to enrich the ITSI user experience by facilitating comprehension and action based on the data derived from monitoring services within ITSI.

Module (SOAR): Enables connectivity between the Splunk SOAR platform and third-party security products and devices. Connector modules are written in Python and packaged within a Splunk SOAR app as Python modules for import.

Module (SPL2): Think of an SPL2 module as a file containing one or more interconnected SPL2 statements. These modules are beneficial for organizing a collection of related searches, custom functions, and custom data types.

Monitor: Monitoring involves observing a file, directory, script output, or network port for fresh data. It also refers to a configured data input of one of these types. Configuring a data input for an incoming data source instructs Splunk Enterprise to monitor that input.

Multiline Event: A multiline event is an event that extends across multiple lines. Events sourced from Apache logs and XML logs frequently exhibit multiline characteristics.

N

Non-Searchable: A non-searchable indexer cluster bucket copy contains solely the rawdata file, omitting the index files. Consequently, it cannot be searched directly. Non-searchable copies occupy less disk space compared to searchable copies, but they necessitate substantial processing to be converted into a searchable state.

Notable Event: A notable event is generated by a correlation search as an alert. It incorporates custom metadata fields to facilitate investigation of the alert conditions and track event remediation. This terminology is applicable to Splunk Enterprise Security, the Splunk App for PCI Compliance, and Splunk IT Service Intelligence.

nullQueue: A null device in Splunk Enterprise, akin to /dev/null in *nix operating systems, serves as a destination for undesired incoming events. Splunk Enterprise directs unwanted events to nullQueue for disposal during data routing and filtering.

O

Observable: An observable is data that signifies an event detected or observed on a computer system, network, or digital entity. These observables can range from malicious to benign in nature.

Orchestrating Command: An orchestrating command is a search command that influences the processing of a search without directly impacting the final search results. For instance, the noop command can be applied to a search to enable or disable optimizations that enhance search performance. Other examples of orchestrating commands include redistribute and localop. Additionally, the lookup command can function as an orchestrating command when used with the local=t argument.

Output Group: One or more indexers (receiving nodes) to which you set up a forwarder to transmit data. You establish an output group in outputs.conf using the tcpout:<target group> stanza, where you designate one or more indexers as receiving nodes. The forwarder utilizes the target group stanza to dispatch data to these receiving nodes.

P

Panel: A panel within a dashboard or form that accommodates one or more visualizations. Panels can be crafted and modified using the Dashboard Editor. There are three types of panels:

  1. Inline: Includes one or more inline searches to produce data for visualizations.
  2. Panel from a report: Derived from a search and visualization within a report.
  3. Prebuilt panel: A panel generated in Simple XML code that can be shared and reused across different dashboards.

Parsing: The parsing segment is the second stage of the data pipeline. Data enters this segment from the input segment. It is here that event processing takes place, where Splunk Enterprise analyzes data into logical components. Once data is parsed, it progresses to the next pipeline segment, indexing. Parsing of external data can occur on either an indexer or a heavy forwarder. Parsing may also take place on other components under specific circumstances:

  • Various components, such as search heads and indexer cluster master nodes, process their own internal data and perform parsing locally.
  • When a universal forwarder ingests structured data, it performs the parsing locally. The indexer does not further parse the structured data.

parsingQueue: The parsingQueue is a buffer in the data pipeline where data is temporarily stored upon entry into the system, awaiting parsing (event processing). Incoming data initially enters the parsingQueue, then proceeds to the parsing pipeline for event processing. Following this, it moves to the indexQueue and subsequently to the indexing pipeline, where the index is constructed. Typically, both parsing and indexing occur on the indexer. However, it is possible to separate the parsing stage, allowing it to take place on a heavy forwarder instead.

Peer Node: The peer node within an indexer cluster is responsible for indexing external data and replicating data from other peer nodes. In a typical indexer cluster setup, there are multiple peer nodes, with each peer node storing both external data and replicated data.

Per-result alert: The term "Per-result alert" was previously utilized to describe real-time alerts triggered on a per-result basis.

Pipe Operator: The vertical bar character "|" serves as a pipe operator to chain together a sequence (or pipeline) of search commands. The search processing language executes commands from left to right. In a pipeline of search commands, the output of the command to the left of the pipe operator is passed as input to the command to its right.
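
A sketch with hypothetical index and field names: each command receives the output of the previous one, progressively filtering, aggregating, sorting, and truncating the results:

    index=web status=404
    | stats count by uri_path
    | sort -count
    | head 10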

Pipeline set: A pipeline set is an instantiation of the event processing segment within the data pipeline. By enabling index parallelization, you can configure an indexer to utilize multiple pipeline sets. This can enhance data throughput and resource efficiency in certain deployments.

Pivot: A table, chart, or visualization generated from a dataset within a data model. Pivots are crafted using the Pivot Editor. Once a pivot is created, it can be saved as a report or dashboard panel. If a pivot experiences delays during initial loading, enhancing its performance can be achieved by implementing data model acceleration on its corresponding data model object.

Platform Alert: A platform alert is a saved search incorporated into the monitoring console. These alerts inform administrators of potential issues that could impact the stability of their Splunk software environment.

Playbook: A playbook is a saved sequence of actions, prompts, or manual tasks that can be executed by connectors or analysts to automate security workflows.

R

Reduced Bucket: A reduced bucket is a bucket that lacks full-size tsidx files, having undergone the tsidx reduction process.

Relative Time Modifiers: A relative time modifier is a character string that can be appended to a search (or saved search definition) to specify time ranges relative to when the search was executed, rather than based on an absolute time. For instance, using a relative time modifier like -60m indicates a time frame of "60 minutes ago."
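
For example, relative time modifiers can be applied with the earliest and latest search arguments; the @ character snaps to a time unit boundary, so -7d@d means "7 days ago, snapped to midnight":

    index=web earliest=-60m latest=now
    index=web earliest=-7d@d latest=@d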

REST API: Utilize the Splunk REST API to interact with resources within a deployment. This includes running searches, monitoring the deployment, and managing configurations or objects. The splunkd server hosts the REST API endpoints. Each endpoint operation may have required parameters and may be subject to capability restrictions. The responses from endpoints can include values representing resource state and/or other details. The REST API supports standard CRUD (Create, Read, Update, Delete) operations through the following methods (a curl sketch follows the list):

  • Use a GET request to retrieve information about a resource.
  • Use a POST request to create and/or update a resource.
  • Use a DELETE request to remove a resource.
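
A minimal curl sketch against a local splunkd instance (hostname, port, and credentials are placeholders): the POST creates a search job and returns a search ID (sid), which the GET then uses to fetch results:

    # Create a search job (POST)
    curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
        -d search="search index=_internal | head 5"

    # Fetch the results once the job completes (GET)
    curl -k -u admin:changeme \
        "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"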

Risk Object: Any entity, including assets, identities, users, or devices within your network, that generates machine data, which can be leveraged by Splunk Enterprise Security to populate lookups and provide context for identifying potential security threats.

Role: A set of permissions and capabilities that delineates a user's role within the Splunk platform. Users in the Splunk platform may be assigned one or multiple roles.

S

Search Head: The search head in Splunk is responsible for coordinating search requests, running searches, and displaying search results to users.

Span: A span in Splunk Observability Cloud is a single operation within a trace. It is used to analyze and visualize the flow of data through a system.

Splunk: When used alone, "Splunk" refers only to the company, not to any product.

Splunk Software: Splunk software refers to any combination of Splunk Enterprise, Splunk Cloud Platform, any Splunk-supported apps and add-ons, and any other software produced by Splunk.

Splunk Platform: The Splunk platform refers to both Splunk Enterprise and Splunk Cloud Platform.

Stack Trace: A stack trace in Splunk is the data structure used by a machine to keep track of which methods are currently being called. It is used to analyze and debug software applications.

Splunk Distribution of OpenTelemetry Collector: A distribution of the OpenTelemetry Collector that bundles the components needed to receive, process, and export metrics, traces, and logs for a specific platform and send them to Splunk Observability Cloud.

Splunk Observability Cloud: Splunk Observability Cloud is a cloud-only product suite that is separate from the Splunk platform product offerings. It is used to provide transparent usage data with detailed reports on all monitored hosts, containers, and metrics.

Splunk Cloud Platform: Splunk Cloud Platform is a product offering from Splunk that provides cloud-based data collection, indexing, and search capabilities. It is used to provide a scalable and secure way to collect and analyze data.

Splunk Enterprise: Splunk Enterprise is a product offering from Splunk that provides on-premises data collection, indexing, and search capabilities. It is used to provide a secure and customizable way to collect and analyze data.

Splunk Infrastructure Monitoring: Splunk Infrastructure Monitoring is a product offering from Splunk that provides monitoring and analytics capabilities for infrastructure and applications. It is used to monitor and analyze data from various sources.

Splunk Observability: Splunk Observability is a product offering from Splunk that provides observability capabilities for applications and services. It is used to monitor and analyze data from various sources.

T

Table Dataset: Can also be referred to as "tables," table datasets are a type of dataset that can be established and managed using the Table Editor, provided the Splunk Datasets Add-on is installed. Initial data for a table dataset can be sourced from a blend of indexes and source types, an existing dataset, or a search string.

Tag: A knowledge object that facilitates searching for events containing specific field values. Tags can be assigned to any field/value combination, including event types, hosts, sources, and source types, allowing for one or multiple tags to be associated with each.

Throttle: Decrease the frequency of alert triggers. Alerts may trigger frequently due to recurring results from the search. The alert's scheduling can also contribute to frequent triggering. To lessen the frequency of alert activations, you can set up a time window for result suppression or define specific field values to be returned by the search.
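
For example, alert throttling can be configured in savedsearches.conf (or through the alert's UI settings). The sketch below, with a hypothetical alert name, suppresses repeat triggers for ten minutes per host value:

    [My error alert]
    alert.suppress = 1
    alert.suppress.period = 10m
    alert.suppress.fields = host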

Time Range Picker: A feature used to choose and specify the time frame for a search within Splunk Web. In the Search and Reporting app, the time range picker is displayed as a menu located on the right side of the search bar. Additionally, custom sets of time ranges can be defined for forms in views and dashboards. The time range picker allows you to execute a search over a predefined period, such as Last 15 minutes or Yesterday. It also enables the creation of custom time ranges for searches or the establishment of a data collection window for real-time searches.

By default, the time range picker is set to Last 24 hours. This default focuses on recent events and restricts the search's time frame to enhance performance. A search over All time, in contrast, scans the entire dataset in your index, from the oldest to the most recent events. For extensive data sets, narrowing down the search's time range can enhance its efficiency.

tsidx file: A time-series index file, also known as an index file, is a file that links each unique keyword in your data to specific event locations stored in a corresponding rawdata file. These rawdata files, along with their associated tsidx files, collectively form the contents of an index bucket.
When conducting a search, the search keywords are matched against the tsidx files to locate the relevant events in the rawdata file. To expedite searches, bloom filters are employed to narrow down the tsidx files that Splunk Enterprise needs to examine for accurate results. For data model acceleration, Splunk Enterprise generates a distinct set of tsidx files. These tsidx files serve as condensed representations of the data extracted by the data model.

U

Universal Forwarder: A universal forwarder is a lightweight Splunk Enterprise instance that sends data to another Splunk Enterprise instance or a third-party system. It contains only the essential components required for forwarding data and lacks support for Python and a user interface. In most cases, the universal forwarder is the preferred option for forwarding data to indexers. However, it forwards unparsed data, except for structured data. A heavy forwarder is necessary for routing event-based data.

User Authentication: User authentication in the Splunk platform determines access permissions and user levels. Supported methods include LDAP, scripted authentication, single sign-on (SSO), and built-in authentication.

V

Valid: The state of an indexer cluster in which the cluster contains exactly one primary copy of every bucket. For instance, in a properly configured multisite indexer cluster, each site with search affinity has a complete set of primary bucket copies. A valid cluster can effectively serve search requests across the entire dataset.

View: A user interface page in Splunk Cloud Platform and Splunk Enterprise that displays information or provides controls for a search or another view. Dashboards and forms are views, whether built with Simple XML, Dashboard Studio, HTML, or custom UDF frameworks. Views also include the Search page and the Analytics Workspace.

Visualization: Visual depictions of search outcomes derived from inline searches, pivots, or reports. The Splunk platform offers various visualization choices such as tables, line charts, Choropleth maps, and single value displays.