You've probably used Envoy without knowing it. If you've deployed to Istio, used AWS App Mesh, or run Consul Connect — Envoy was the proxy doing the actual work. It's the most important piece of infrastructure in the cloud-native world that most developers never interact with directly. Let's change that.
What is Envoy?
Envoy is a high-performance, programmable L4/L7 proxy designed for modern microservice architectures. Unlike nginx or HAProxy, which are traditionally configured via static config files, Envoy is designed to be reconfigured at runtime via APIs — no restarts needed.
Envoy's Core Concepts
- Listener: A port Envoy listens on (e.g., port 8080). Accepts incoming connections.
- Filter Chain: A pipeline of filters that process the request — TLS termination, HTTP parsing, rate limiting, auth, etc.
- Route: Rules that match requests (by path, header, method) to a destination cluster.
- Cluster: A named group of backend servers. Think of it like a "service" in Kubernetes.
- Endpoint: An individual IP:port within a cluster. The actual server handling the request.
```yaml
# envoy.yaml — Static configuration example
static_resources:
  listeners:
  - name: http_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              # Route /api/* to the API cluster
              - match:
                  prefix: "/api"
                route:
                  cluster: api_service
                  timeout: 30s
              # Route everything else to the frontend
              - match:
                  prefix: "/"
                route:
                  cluster: frontend_service
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: api_service
    type: STRICT_DNS
    load_assignment:
      cluster_name: api_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: api-backend
                port_value: 3000
  - name: frontend_service
    type: STRICT_DNS
    load_assignment:
      cluster_name: frontend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: frontend
                port_value: 80
```
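To try this config locally, one option is running the official Envoy image (the image tag below is an assumption — pin whichever release you actually use; the `--mode validate` flag checks the config without starting the proxy):

```shell
# Validate the config without starting the proxy
docker run --rm -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml" \
  envoyproxy/envoy:v1.30-latest --mode validate -c /etc/envoy/envoy.yaml

# Run it, exposing the listener port
docker run --rm -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml" \
  -p 8080:8080 envoyproxy/envoy:v1.30-latest

# In another terminal: /api/* should hit api_service, everything else frontend_service
curl -i http://localhost:8080/api/users
curl -i http://localhost:8080/
```

Note that the `api-backend` and `frontend` hostnames only resolve if those containers exist on the same Docker network; validation works regardless.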
What is xDS? (The Dynamic Control Plane)
The static config above works, but every change requires restarting Envoy. In production, with thousands of Envoy instances, that's untenable. Enter xDS — a set of gRPC APIs through which a control plane pushes configuration to Envoy dynamically.
| API | Full Name | What It Configures |
|---|---|---|
| LDS | Listener Discovery Service | Which ports to listen on |
| RDS | Route Discovery Service | How to route requests (path, headers) |
| CDS | Cluster Discovery Service | Backend service groups |
| EDS | Endpoint Discovery Service | Individual server IPs within clusters |
| SDS | Secret Discovery Service | TLS certificates and keys |
| ADS | Aggregated Discovery Service | All of the above in one stream |
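The mechanics behind these APIs can be sketched with a toy model: the control plane keeps a versioned snapshot of resources per node, and a proxy only receives a response when the version it holds is stale. This is plain illustrative Go — the type names here are invented for the sketch, not a real xDS library API:

```go
package main

import "fmt"

// Snapshot is a toy version of an xDS config snapshot: a version label
// plus the resources it carries (here, cluster name -> endpoint addresses).
type Snapshot struct {
	Version  string
	Clusters map[string][]string
}

// SnapshotCache maps an Envoy node ID to that node's current snapshot.
type SnapshotCache map[string]Snapshot

// SetSnapshot atomically replaces a node's config.
func (c SnapshotCache) SetSnapshot(node string, s Snapshot) { c[node] = s }

// Fetch returns the snapshot only if the client's version is stale,
// mirroring how an xDS stream stays silent until something changes.
func (c SnapshotCache) Fetch(node, haveVersion string) (Snapshot, bool) {
	s, ok := c[node]
	if !ok || s.Version == haveVersion {
		return Snapshot{}, false
	}
	return s, true
}

func main() {
	cache := SnapshotCache{}
	cache.SetSnapshot("envoy-node-1", Snapshot{
		Version:  "v1",
		Clusters: map[string][]string{"api_service": {"api-backend:3000"}},
	})

	// A proxy connects with no config and receives v1.
	s, updated := cache.Fetch("envoy-node-1", "")
	fmt.Println(updated, s.Version) // true v1

	// Nothing changed: the same version yields no update.
	_, updated = cache.Fetch("envoy-node-1", "v1")
	fmt.Println(updated) // false

	// The control plane scales the backend; the proxy sees v2 on its next poll/push.
	cache.SetSnapshot("envoy-node-1", Snapshot{
		Version:  "v2",
		Clusters: map[string][]string{"api_service": {"api-backend:3000", "api-backend-2:3000"}},
	})
	s, updated = cache.Fetch("envoy-node-1", "v1")
	fmt.Println(updated, len(s.Clusters["api_service"])) // true 2
}
```

Real control planes implement exactly this pattern over gRPC streams, with ACK/NACK handling layered on top.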
```yaml
# Envoy bootstrap config pointing to an xDS control plane
# (instead of static_resources alone, use dynamic_resources)
node:
  id: envoy-node-1   # must match the node ID the control plane keys snapshots by
  cluster: demo
dynamic_resources:
  ads_config:        # one aggregated (ADS) stream carries all resource types
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
static_resources:
  clusters:
  # The one cluster Envoy must know about statically: the control plane itself
  - name: xds_cluster
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # gRPC requires HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: control-plane.default.svc
                port_value: 18000
```
Building a Simple xDS Server (Go)
```go
// A minimal xDS control plane using go-control-plane
package main

import (
	"context"
	"log"
	"net"

	cluster "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	core "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpoint "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	discovery "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
	"github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	"github.com/envoyproxy/go-control-plane/pkg/resource/v3"
	"github.com/envoyproxy/go-control-plane/pkg/server/v3"
	"google.golang.org/grpc"
)

func main() {
	// Create a snapshot cache (stores the current config per node ID).
	// "true" enables ADS mode: all resources are served over one stream.
	snapshotCache := cache.NewSnapshotCache(true, cache.IDHash{}, nil)

	// Build the configuration snapshot
	snap, err := cache.NewSnapshot("v1",
		map[resource.Type][]types.Resource{
			resource.ClusterType: {
				makeCluster("api_service", "api-backend", 3000),
			},
		},
	)
	if err != nil {
		log.Fatal(err)
	}
	// The node ID must match node.id in Envoy's bootstrap config
	if err := snapshotCache.SetSnapshot(context.Background(), "envoy-node-1", snap); err != nil {
		log.Fatal(err)
	}

	// Start the gRPC server and register the aggregated (ADS) endpoint
	grpcServer := grpc.NewServer()
	xdsServer := server.NewServer(context.Background(), snapshotCache, nil)
	discovery.RegisterAggregatedDiscoveryServiceServer(grpcServer, xdsServer)

	lis, err := net.Listen("tcp", ":18000")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("xDS server listening on :18000")
	grpcServer.Serve(lis)
}

// makeCluster builds a STRICT_DNS cluster with a single endpoint.
func makeCluster(name, host string, port uint32) *cluster.Cluster {
	return &cluster.Cluster{
		Name:                 name,
		ClusterDiscoveryType: &cluster.Cluster_Type{Type: cluster.Cluster_STRICT_DNS},
		LoadAssignment: &endpoint.ClusterLoadAssignment{
			ClusterName: name,
			Endpoints: []*endpoint.LocalityLbEndpoints{{
				LbEndpoints: []*endpoint.LbEndpoint{{
					HostIdentifier: &endpoint.LbEndpoint_Endpoint{
						Endpoint: &endpoint.Endpoint{
							Address: &core.Address{
								Address: &core.Address_SocketAddress{
									SocketAddress: &core.SocketAddress{
										Address:       host,
										PortSpecifier: &core.SocketAddress_PortValue{PortValue: port},
									},
								},
							},
						},
					},
				}},
			}},
		},
	}
}
```

To add a new backend dynamically: build a new snapshot that includes the new endpoints, call snapshotCache.SetSnapshot() with version "v2", and Envoy automatically picks up the change — no restart.
Who Uses Envoy + xDS?
The service meshes from the opening paragraph — Istio, AWS App Mesh, and Consul Connect — are all built on this pair: each ships a control plane that speaks xDS to a fleet of Envoy proxies.
When to Use Envoy
- Service mesh sidecar: Envoy runs alongside every service, handles mTLS, retries, circuit breaking, observability — transparently.
- API gateway: Route external traffic to internal services with rate limiting, auth, and L7 routing.
- gRPC proxy: Envoy has first-class gRPC support — load balancing, transcoding (gRPC ↔ HTTP/JSON), health checking.
- Dynamic infrastructure: When backends change frequently (Kubernetes pods scaling up/down), xDS pushes updates instantly.
- Observability backbone: Envoy emits detailed L7 metrics, distributed tracing headers (Jaeger, Zipkin), and structured access logs for every request.
Envoy is not a replacement for nginx — it's a different tool for a different era. If you have a static website, nginx is perfect. If you have 200 microservices talking to each other with dynamic routing, mTLS, and traffic shaping — Envoy is what you need. And xDS is how you control it at scale.