Consul Glossary
Introduction
Navigating through the world of service networking and configuration can feel complex without a clear grasp of essential terms and ideas.
Within this Consul glossary, we explain key terms associated with service networking, configuration, and optimization, helping you deepen your understanding of Consul and of service management more broadly.
Consul Terms
A
Access Control List (ACL): An Access Control List (ACL) is a set of permissions that define which users and groups can access an object, such as a file or folder, and what actions they can perform. Consul utilizes ACLs to secure its user interface, API, command-line interface, service communications, and agent communications.
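When ACLs are enabled, HTTP API requests must carry a token whose attached policies permit the requested operation. Below is a minimal sketch of passing a token with a request, assuming a local agent with ACLs enabled; the token value and key name are placeholders:

```python
import urllib.request

# With ACLs enabled, Consul rejects API calls whose token lacks the required permissions.
# The X-Consul-Token header carries the token; the value below is a placeholder.
req = urllib.request.Request(
    "http://127.0.0.1:8500/v1/kv/app/config/greeting",
    headers={"X-Consul-Token": "REPLACE-WITH-A-REAL-TOKEN"},
)
print(urllib.request.urlopen(req).read())
```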
Agent: The Consul agent is a persistent process that runs on every member of a Consul cluster and is started with the command consul agent. It can function as either a client or a server. While nodes are typically labeled as clients or servers based on their agent's role, there are additional agent variations. All agents support DNS and HTTP interfaces, manage health checks, and keep services in sync.
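Because every agent exposes a local HTTP interface, cluster membership can be inspected directly from any node. The following is a minimal sketch, assuming an agent is listening on the default address and port (127.0.0.1:8500):

```python
import json
import urllib.request

# Ask the local agent's HTTP interface which cluster members it knows about.
with urllib.request.urlopen("http://127.0.0.1:8500/v1/agent/members") as resp:
    members = json.load(resp)

for member in members:
    print(member["Name"], member["Addr"])
```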
API Gateway: An Application Programming Interface (API) is a standardized software interface that enables communication between applications. Most modern applications are built using APIs. An API Gateway serves as a single entry point into these API-driven applications.
Application Security: Application Security involves ensuring the security of applications by identifying and addressing potential threats or data vulnerabilities. This process can occur during or after the application development phase, but integrating security measures early on, even before development starts, is more efficient for both app and security teams.
Application Services: Application Services refer to a collection of services required to deploy, run, and enhance applications, such as application performance monitoring, load balancing, service discovery, service proxy, security, and autoscaling.
Authentication and Authorization (AuthN and AuthZ): Authentication (AuthN) involves verifying user identity, while Authorization (AuthZ) determines whether to grant or deny access based on that user's identity.
Auto Scaling Groups: An Auto Scaling Group is an AWS-specific concept that represents a collection of Amazon EC2 instances treated as a single logical unit for the purposes of automatic scaling and management.
Autoscaling: Autoscaling is the process of automatically adjusting computational resources based on network traffic demands. This can be done through horizontal scaling, which adds more machines to the resource pool, or vertical scaling, which increases the capacity of existing machines.
B
Blue-Green Deployments: Blue-Green Deployment is a deployment strategy aimed at minimizing downtime by maintaining two identical production environments known as Blue and Green. In this setup, Blue serves as the active environment while Green remains idle; new releases are deployed to the idle environment, and traffic is switched over once they have been verified.
C
Canary Deployments: Canary deployment is a pattern used to gradually roll out updates to a subset of users or servers. The goal is to deploy the changes to a small group, test them, and then gradually expand the rollout to all users.
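As a generic illustration of the idea (not specific to Consul), a router can send a small, configurable fraction of requests to the canary version and widen that fraction as confidence grows; the fraction and version names below are hypothetical:

```python
import random

def pick_version(canary_fraction: float) -> str:
    # Route a single request: roughly `canary_fraction` of traffic goes to the canary.
    return "canary" if random.random() < canary_fraction else "stable"

# Start by sending about 5% of requests to the canary, then raise the fraction over time.
counts = {"stable": 0, "canary": 0}
for _ in range(1000):
    counts[pick_version(0.05)] += 1
print(counts)
```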
Client: A client Consul agent forwards all remote procedure calls (RPCs) to a server agent. The client agent is relatively stateless, with minimal background activity. Its main task is participating in the local area network (LAN) gossip pool, which consumes only minimal resources and network bandwidth.
Client-side Load Balancing: Client-side load balancing is a load balancing method where clients are responsible for selecting the appropriate servers to call. This approach is integrated into the client application itself. Servers can also have their own load balancer in addition to the client-side load balancer.
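A minimal sketch of the pattern: the client itself holds the list of candidate servers and picks one per request, here at random. The addresses are placeholders; in practice the list usually comes from service discovery:

```python
import random

# The client keeps the server list locally and chooses a target for each request.
servers = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

def choose_server() -> str:
    return random.choice(servers)

print(choose_server())
```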
Cloud Native Computing Foundation: Established in 2015 as a project under the Linux Foundation, the Cloud Native Computing Foundation (CNCF) aims to promote the development of container technology and foster industry-wide collaboration on its advancement. HashiCorp's decision to join the CNCF was driven by the desire to enhance product integrations with CNCF projects and engage more closely with the expansive community of cloud engineers focused on cloud-native technologies.
Consensus: In this documentation, we define consensus as agreement on both leader election and the ordering of transactions. Because these transactions are applied to a finite-state machine, consensus here implies the consistency of a replicated state machine.
Custom Resource Definition (CRD): Custom resources are extensions to the Kubernetes API. A Custom Resource Definition (CRD) file enables users to define their own custom resources, allowing the Kubernetes API server to manage the lifecycle of these custom resources.
D
Datacenter: We characterize a datacenter as a private network setting with low latency and high bandwidth, excluding communication over the public internet. In our context, multiple availability zones within a single EC2 region are treated as a unified datacenter.
E
Egress Traffic: Egress traffic refers to network traffic originating within a network and passing through its routers to reach a destination outside the network.
Elastic Provisioning: Elastic Provisioning is the capability to dynamically allocate computing resources in response to user demand.
Elastic Scaling: Elastic Scaling refers to the capability to automatically provision or deprovision compute or networking resources in response to fluctuations in application traffic patterns.
Envoy Proxy: Envoy Proxy is a contemporary, high-performance, and lightweight edge and service proxy. Initially developed and deployed by Lyft, Envoy Proxy is now an official project under the Cloud Native Computing Foundation (CNCF).
F
Forward Proxy: A forward proxy is used to forward outgoing requests from within a network to the internet, often through a firewall. The primary goals are to provide a layer of security and to optimize network traffic.
G
Gossip: Consul utilizes Serf as its foundation, leveraging its comprehensive gossip protocol for various functions. Serf offers capabilities such as membership management, failure detection, and event dissemination.
H
Hybrid Cloud Architecture: A hybrid cloud architecture combines on-premises, private cloud, and public cloud services within an IT framework. This approach enables workload portability, orchestration, and management across these diverse environments.
- A private cloud, typically located on-premises, is an infrastructure environment managed directly by the user.
- A public cloud, typically off-premises, is an infrastructure service offered by a third-party provider.
I
Identity-based authorization: Identity-based authorization is a security approach that grants or denies access based on the authenticated identity of a user or entity.
Infrastructure as a Service: Infrastructure as a Service (IaaS) is a cloud computing model where computing resources are provided online through APIs. These APIs interact with the underlying infrastructure, including physical computing resources, location, data partitioning, scaling, security, backup, and other fundamental components. IaaS is one of the four primary cloud service models, alongside Software as a Service (SaaS), Platform as a Service (PaaS), and Serverless computing.
Infrastructure as Code: Infrastructure as Code (IaC) is the practice of provisioning and managing computing resources automatically through software, rather than relying on manual configuration tools. This approach enables developers and operations teams to manage infrastructure programmatically.
Ingress Controller: Within Kubernetes, an 'ingress' is an entity that facilitates external access to Kubernetes services from beyond the Kubernetes cluster. An ingress controller is tasked with managing ingress, typically utilizing a load balancer or edge router to assist in traffic handling.
Ingress Gateway: An Ingress Gateway is an edge load balancer within a service mesh that enables secure and reliable access from external networks to Kubernetes clusters.
Ingress Traffic: Ingress traffic refers to network traffic that originates from outside a network and is destined for a destination within that network.
K
Key-Value Store: A Key-Value Store, also known as a Key-Value Database, is a data model where each unique key is associated with a single corresponding value within a collection.
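Consul itself includes a key-value store exposed through its HTTP API. A minimal sketch of writing and then reading a key, assuming a local agent with default settings and no ACL token required; note that the API returns values base64-encoded:

```python
import base64
import json
import urllib.request

BASE = "http://127.0.0.1:8500/v1/kv"

# Write a value: PUT /v1/kv/<key> with the raw value as the request body.
put = urllib.request.Request(f"{BASE}/app/config/greeting", data=b"hello", method="PUT")
urllib.request.urlopen(put)

# Read it back: GET /v1/kv/<key> returns JSON entries whose Value field is base64-encoded.
with urllib.request.urlopen(f"{BASE}/app/config/greeting") as resp:
    entry = json.load(resp)[0]
print(base64.b64decode(entry["Value"]).decode())  # prints: hello
```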
L
L4-L7 Services: L4-L7 services encompass a range of functions, including load balancing, web application firewalls, service discovery, and monitoring, that operate at layers 4 through 7 of the Open Systems Interconnection (OSI) model.
LAN Gossip: The 'LAN gossip pool' refers to the set of nodes located on the same local area network or within the same datacenter, participating in the gossip protocol.
Layer 7 Observability: Layer 7 Observability is a feature of Consul Service Mesh that provides a unified workflow for metrics collection, distributed tracing, and logging. Additionally, it enables centralized configuration and management for the distributed data plane.
Load Balancer: A load balancer is a network device that functions as a reverse proxy, distributing network and application traffic across multiple servers.
Load Balancing: Load Balancing is the practice of distributing network and application traffic across multiple servers.
Load Balancing Algorithms: Load balancers utilize algorithms to decide how to distribute traffic among servers in the server farm. Some frequently employed algorithms include the following (a minimal round-robin sketch appears after the list):
- Round Robin
- Least Connections
- Weighted Connections
- Source IP Hash
- Least Response Time Method
- Least Bandwidth Method
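To make the first of these concrete, here is a minimal round-robin sketch: each incoming request is handed to the next backend in the list, wrapping around at the end. The backend addresses are placeholders:

```python
import itertools

# Round robin: cycle through the backends, one request at a time.
backends = ["10.0.0.21:80", "10.0.0.22:80", "10.0.0.23:80"]
next_backend = itertools.cycle(backends)

for request_id in range(6):
    print(f"request {request_id} -> {next(next_backend)}")
```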
M
Microservice Segmentation: Microservice segmentation, which can include visual representation, refers to the partitioning of a microservices-based application architecture. This enables administrators to visualize the individual microservices and their interactions.
Multi-cloud: A multi-cloud setup typically involves integrating two or more cloud computing services from various providers within a unified architecture. This setup involves spreading compute resources, storage, and networking components across multiple cloud environments. A multi-cloud environment can consist of entirely private cloud services, entirely public cloud services, or a mix of both.
Multi-cloud Networking: Multi-cloud Networking enables network configuration and management across multiple cloud providers through the use of APIs.
Mutual Transport Layer Security (mTLS): Mutual Transport Layer Security, commonly referred to as mTLS, is an authentication method in which both the client and the server verify each other, securing network traffic in both directions.
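In ordinary TLS only the server presents a certificate; with mTLS the client must present one as well, and the server verifies it before traffic flows. A minimal server-side sketch using Python's standard ssl module; the certificate file names and port are placeholders:

```python
import socket
import ssl

# Server-side TLS context that also demands and verifies a client certificate (mutual TLS).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # server's own identity
context.load_verify_locations(cafile="ca.crt")  # CA used to validate client certificates
context.verify_mode = ssl.CERT_REQUIRED         # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()  # handshake fails unless the client also authenticates
```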
N
Network Middleware Automation: Network Middleware Automation is the process of automatically propagating service changes to network components such as load balancers and firewalls, as well as automating other network-related tasks.
Network security: Network security involves safeguarding data and networks through a combination of policies and practices. These measures are implemented to deter and detect unauthorized access, misuse, alteration, or disruption of computer networks and their resources.
Network traffic management: Network Traffic Management involves maintaining peak network performance through the utilization of various network monitoring tools. It also encompasses traffic management strategies like bandwidth monitoring, deep packet inspection, and application-based routing.
Network Visualization: Network Visualization is the practice of graphically representing networks and their interconnected elements using a diagram format that typically includes boxes and lines. Within the realm of microservices architecture, visualization offers a comprehensive view of service interconnections, service-to-service communication, and the resource consumption of individual services.
O
Observability: Observability is the practice of logging, monitoring, and generating alerts based on the events occurring within a deployment or instance.
P
Platform as a Service: Platform as a Service (PaaS) is a cloud computing model that enables users to develop, execute, and manage applications without the complexity of building and maintaining the underlying infrastructure typically required for application development and deployment.
R
RPC: 'Remote Procedure Call' (RPC) is a request-response protocol enabling a client to send a request to a server and receive a response.
Reverse Proxy: A reverse proxy manages incoming requests from external sources to the internal network. Reverse proxies offer a layer of security by preventing direct external client access to data on corporate servers. The reverse proxy is typically positioned between the web server and incoming external traffic.
Role-based Access Controls: Role-based access control (RBAC) is the practice of granting or limiting a user's access based on their designated role within the organization.
S
Server: A server agent has a broader range of tasks, such as being part of the Raft quorum, managing cluster state, handling RPC requests, sharing WAN gossip across datacenters, and routing queries to leaders or remote datacenters.
Server-side Load Balancing: A Server-side Load Balancer is positioned between the client and the server farm. It receives incoming traffic and distributes it across multiple backend servers using various load balancing techniques.
Service Catalog: A service catalog is a structured and carefully curated compilation of services that developers can connect to their applications.
Service Configuration: In a microservices application architecture, a service configuration encompasses the name, description, and specific functionality of a service. The service configuration file contains the service definition.
Service Discovery: Service Discovery is the process of detecting services and devices on a network. In a microservices context, service discovery is how applications and microservices locate each other on a network.
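With Consul, one common form of service discovery is asking the local agent's HTTP API for healthy instances of a named service. The sketch below assumes a local agent and a registered service named web (the service name is a placeholder):

```python
import json
import urllib.request

# List instances of the "web" service that are currently passing their health checks.
url = "http://127.0.0.1:8500/v1/health/service/web?passing=true"
with urllib.request.urlopen(url) as resp:
    instances = json.load(resp)

for inst in instances:
    svc = inst["Service"]
    # A service may register its own address; otherwise fall back to the node's address.
    print(svc["Address"] or inst["Node"]["Address"], svc["Port"])
```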
Service Mesh: A Service Mesh is the infrastructure layer that enables communication between microservices, frequently utilizing a sidecar proxy. This network of interconnected microservices, together with the interactions between them, makes up a microservices-based application.
Service Networking: Service Networking integrates various components to provide a specific service. It serves as the central nervous system, orchestrating an organization's networking and monitoring activities.
Service Proxy: A service proxy is the client-side proxy component for a microservices-based application. It enables applications to send and receive messages through a proxy server.
Service Registration: Service registration is the process of informing clients and routers about the available instances of a service. Service instances are registered with a service registry when they start up and deregistered when they shut down.
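With Consul, registration typically happens against the local agent, either through a service definition file or via the HTTP API. A minimal sketch using the API; the service name, port, and health check URL are placeholders:

```python
import json
import urllib.request

# Register a service, with a simple HTTP health check, against the local Consul agent.
registration = {
    "Name": "web",
    "Port": 8080,
    "Check": {"HTTP": "http://127.0.0.1:8080/health", "Interval": "10s"},
}
req = urllib.request.Request(
    "http://127.0.0.1:8500/v1/agent/service/register",
    data=json.dumps(registration).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
urllib.request.urlopen(req)
```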
Service Registry: A Service Registry is a repository of service instances and details on how to route requests to these service instances.
Service-to-service communication: Service-to-service communication, also known as inter-service communication, is the capability of a microservice application instance to interact with another instance to collaborate and manage client requests.
Software as a Service: Software as a Service (SaaS) is a software delivery model where the provider hosts the software and users access it through a subscription-based licensing arrangement.
W
WAN Gossip: The 'WAN gossip pool' consists of servers primarily located in different datacenters, which communicate with each other over the internet or wide area network.