When your microservices talk to each other, how do you ensure that only authorized services can make those calls? API keys leak. JWTs expire and need refresh infrastructure. The most robust option is certificate-based machine-to-machine (M2M) authentication — and there's a Go library that makes it straightforward: m2mauth.

Why m2mauth?

Building mTLS from scratch in Go means dealing with TLS config, certificate loading, peer verification, and error handling yourself. The m2mauth library wraps all of this into a clean API focused specifically on service-to-service authentication.

DIY mTLS vs the m2mauth Library

Rolling your own:
  • 50+ lines of TLS config boilerplate
  • Easy to misconfigure (skipped verification, weak ciphers)
  • Certificate rotation logic is your problem
  • Subtle security bugs in peer validation

Using m2mauth:
  • Clean API — a few lines to set up
  • Secure defaults (TLS 1.2+, strong ciphers)
  • Handles certificate loading and validation
  • Open source, auditable, community-maintained

Installation

go get github.com/vishalanandl177/m2mauth

How M2M Auth Works

M2M Authentication Flow (Service A is the client, Service B the server):

  1. Service A connects, presenting its client certificate.
  2. Service B responds with its own certificate.
  3. Both sides verify the peer's certificate against the shared CA.
  4. Service A sends the encrypted request (identity proven).
  5. Service B returns the encrypted response.

Setting Up the Server

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/vishalanandl177/m2mauth"
)

func main() {
    // Create M2M auth configuration
    config := m2mauth.Config{
        CertFile: "certs/server-cert.pem",   // Server's certificate
        KeyFile:  "certs/server-key.pem",     // Server's private key
        CAFile:   "certs/ca-cert.pem",        // CA to verify client certs
    }

    // Your HTTP handler
    mux := http.NewServeMux()
    mux.HandleFunc("/api/data", func(w http.ResponseWriter, r *http.Request) {
        // At this point, the client's certificate has been verified
        // by m2mauth — only trusted services reach this handler
        fmt.Fprintf(w, "Hello from Service B! You are authenticated.")
    })

    mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        fmt.Fprintf(w, "OK")
    })

    // Start mTLS server using m2mauth
    server, err := m2mauth.NewServer(config, mux)
    if err != nil {
        log.Fatalf("Failed to create M2M server: %v", err)
    }

    log.Println("M2M server listening on :8443 (mTLS required)")
    log.Fatal(server.ListenAndServeTLS(":8443"))
}

Setting Up the Client

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"

    "github.com/vishalanandl177/m2mauth"
)

func main() {
    // Client M2M auth configuration
    config := m2mauth.Config{
        CertFile: "certs/client-cert.pem",   // Client's certificate
        KeyFile:  "certs/client-key.pem",     // Client's private key
        CAFile:   "certs/ca-cert.pem",        // CA to verify server cert
    }

    // Create authenticated HTTP client
    client, err := m2mauth.NewClient(config)
    if err != nil {
        log.Fatalf("Failed to create M2M client: %v", err)
    }

    // Make authenticated request — certificate is sent automatically
    resp, err := client.Get("https://localhost:8443/api/data")
    if err != nil {
        log.Fatalf("Request failed: %v", err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("Status: %d\n", resp.StatusCode)
    fmt.Printf("Body: %s\n", body)
    // Output:
    // Status: 200
    // Body: Hello from Service B! You are authenticated.
}

Generating Certificates

For development/testing, generate your own CA and certificates:

# Generate CA (Certificate Authority)
openssl genrsa -out certs/ca-key.pem 4096
openssl req -new -x509 -key certs/ca-key.pem -sha256 \
  -subj "/CN=My Internal CA" -days 3650 -out certs/ca-cert.pem

# Generate Server certificate
openssl genrsa -out certs/server-key.pem 4096
openssl req -new -key certs/server-key.pem \
  -subj "/CN=service-b.local" -out certs/server.csr
openssl x509 -req -in certs/server.csr \
  -CA certs/ca-cert.pem -CAkey certs/ca-key.pem \
  -CAcreateserial -days 365 -sha256 \
  -extfile <(echo "subjectAltName=DNS:localhost,IP:127.0.0.1") \
  -out certs/server-cert.pem

# Generate Client certificate
openssl genrsa -out certs/client-key.pem 4096
openssl req -new -key certs/client-key.pem \
  -subj "/CN=service-a" -out certs/client.csr
openssl x509 -req -in certs/client.csr \
  -CA certs/ca-cert.pem -CAkey certs/ca-key.pem \
  -CAcreateserial -days 365 -sha256 \
  -out certs/client-cert.pem

Using with gRPC

package main

import (
    "log"
    "net"

    "github.com/vishalanandl177/m2mauth"
    "google.golang.org/grpc"
)

func main() {
    config := m2mauth.Config{
        CertFile: "certs/server-cert.pem",
        KeyFile:  "certs/server-key.pem",
        CAFile:   "certs/ca-cert.pem",
    }

    // Get TLS credentials for gRPC
    tlsCreds, err := m2mauth.NewGRPCServerCredentials(config)
    if err != nil {
        log.Fatalf("Failed to create gRPC credentials: %v", err)
    }

    // Create gRPC server with mTLS
    grpcServer := grpc.NewServer(grpc.Creds(tlsCreds))

    // Register your gRPC services here...
    // pb.RegisterMyServiceServer(grpcServer, &myService{})

    // Listen and serve — without this the server never accepts connections
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("Failed to listen on :50051: %v", err)
    }
    log.Println("gRPC server with mTLS on :50051")
    log.Fatal(grpcServer.Serve(lis))
}

Kubernetes Deployment

# Mount certificates from Kubernetes Secrets
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  template:
    spec:
      containers:
        - name: service-b
          image: myregistry/service-b:latest
          ports:
            - containerPort: 8443
          volumeMounts:
            - name: tls-certs
              mountPath: /certs
              readOnly: true
      volumes:
        - name: tls-certs
          secret:
            secretName: service-b-tls

# Create the secret from cert files:
# kubectl create secret generic service-b-tls \
#   --from-file=server-cert.pem=certs/server-cert.pem \
#   --from-file=server-key.pem=certs/server-key.pem \
#   --from-file=ca-cert.pem=certs/ca-cert.pem

# For production: use cert-manager to auto-generate and rotate certs

When to Use m2mauth

  • Microservice-to-microservice: Internal APIs within your cluster where API keys aren't secure enough.
  • Zero-trust environments: Every connection must prove identity cryptographically — not just "I have the right API key."
  • Cross-cluster communication: Services in different Kubernetes clusters or VPCs that need to trust each other.
  • Compliance requirements: PCI-DSS, HIPAA, or SOC 2 often require mutual authentication for sensitive data access.

Scaling M2M Auth: SPIFFE and SPIRE

The m2mauth library is perfect for small-to-medium deployments where you manage certificates manually. But there's a fundamental problem it can't solve: the identity bootstrapping problem.

When a new pod starts in Kubernetes, how does it prove who it is? It can't show a certificate — it doesn't have one yet. It can't use a password — where would you store it securely before the pod exists? This chicken-and-egg problem is exactly what SPIFFE and SPIRE were designed to solve.

What is SPIFFE?

SPIFFE (Secure Production Identity Framework for Everyone) is not a tool — it's an open standard (a set of specifications) that defines how workloads identify themselves to each other. Think of it like how HTTPS is a standard that defines secure web connections. SPIFFE is a standard that defines secure workload identity.

SPIFFE answers three questions:

  • How do you name a workload? → SPIFFE ID (a URI)
  • How do you prove a workload's identity? → SVID (a signed document — X.509 cert or JWT)
  • How does a workload get its identity? → Workload API (a local Unix socket)

SPIFFE IDs: Naming Workloads

Every workload in a SPIFFE-enabled system has a SPIFFE ID — a URI that uniquely identifies it:

# SPIFFE ID format:
spiffe://trust-domain/path

# The trust domain is like a realm or scope:
spiffe://mycompany.com/payments/charge-service
spiffe://mycompany.com/orders/api
spiffe://staging.mycompany.com/payments/charge-service

# Real-world naming patterns:
# By namespace + service account (Kubernetes):
spiffe://prod.acme.com/ns/production/sa/payment-service

# By cluster + service:
spiffe://acme.com/cluster/us-east/service/order-api

# By environment + team + service:
spiffe://acme.com/env/prod/team/platform/service/gateway

The SPIFFE ID is embedded inside the identity document (SVID). When Service A talks to Service B, they exchange SVIDs and verify each other's SPIFFE ID — not IP addresses, not hostnames, not API keys. This is cryptographic proof of identity.

SVIDs: Proving Identity

An SVID (SPIFFE Verifiable Identity Document) is the actual proof of identity. SPIFFE supports two types:

Two types of SPIFFE identity documents:

X.509 SVID (certificate):
  • Standard X.509 certificate with the SPIFFE ID in a URI SAN
  • Short-lived (typically 1 hour, auto-rotated)
  • Works with any TLS library (no SPIFFE SDK needed)
  • Best for: mTLS between services (the most common case)

JWT SVID (token):
  • Standard JWT with the SPIFFE ID in the sub claim
  • Very short-lived (typically 5 minutes)
  • Travels in HTTP headers (no mTLS needed)
  • Best for: L7 proxies, API gateways, crossing trust boundaries

What is SPIRE?

SPIRE (SPIFFE Runtime Environment) is the production implementation of the SPIFFE standard. If SPIFFE is the specification, SPIRE is the software you actually deploy. It has two components:

SPIRE Architecture

  • SPIRE Server (control plane): The central authority. Signs SVIDs, stores registration entries, and manages trust bundles. Runs as a Deployment in K8s.
  • SPIRE Agent (per-node daemon): Runs on every node as a DaemonSet. Attests workloads, caches SVIDs locally, and exposes the Workload API.
  • Workload API (Unix socket): A local gRPC endpoint (/run/spire/sockets/agent.sock) that workloads call to get their SVID. No secrets are needed to call it.
  • Workload (your service): Calls the Workload API on startup, gets its SVID, and uses it for mTLS connections. It never sees a private key file.

Workload Attestation: How SPIRE Knows Who's Asking

This is the clever part — how does SPIRE know which identity to give a workload? It uses attestation: verifying properties of the workload's environment to determine its identity.

Workload attestation flow ("Who are you?"):

  1. The workload (your pod) connects to /run/spire/sockets/agent.sock.
  2. The SPIRE Agent (on the same node) inspects the caller: PID → K8s API → pod name, namespace, service account, labels.
  3. The agent asks the SPIRE Server: "Pod in ns:production, sa:payment-service — match?"
  4. The server answers: "Yes — issue SVID: spiffe://acme.com/.../payment-service".
  5. The agent returns the X.509 SVID, private key, and trust bundle, and auto-rotates them before expiry. The workload never manages keys.

SPIRE supports multiple attestors — plugins that verify workload identity on different platforms:

# Kubernetes attestor selectors:
-selector k8s:ns:production               # Pod is in namespace "production"
-selector k8s:sa:payment-service           # Pod uses service account "payment-service"
-selector k8s:pod-label:app:payments       # Pod has label app=payments
-selector k8s:container-name:main          # Specific container in the pod

# AWS attestor selectors:
-selector aws:iamrole:arn:aws:iam::123:role/my-role  # EC2 instance role
-selector aws:sgid:sg-12345                           # Security group
-selector aws:tag:env:production                      # Instance tag

# Docker attestor selectors:
-selector docker:image-id:sha256:abc123    # Specific image hash
-selector docker:label:service:payments    # Docker label

# The beauty: SPIRE doesn't care WHERE your workload runs.
# Kubernetes, VMs, Docker, bare metal — same identity system.

Trust Domains and Federation

A trust domain is a zone of trust — all workloads within a trust domain share the same root certificates and can verify each other. But what if Service A in us-east.acme.com needs to call Service B in eu-west.acme.com? That's where federation comes in.

SPIFFE federation works by exchanging trust bundles between SPIRE servers, so workloads in different trust domains can verify each other: for example, us-east.acme.com (US East cluster), eu-west.acme.com (EU West cluster), partner.bigcorp.com (a partner company), and onprem.acme.com (an on-prem datacenter).
# Set up federation between two SPIRE servers:

# On us-east SPIRE server: trust eu-west
spire-server bundle set \
  -id spiffe://eu-west.acme.com \
  -path /path/to/eu-west-bundle.json

# On eu-west SPIRE server: trust us-east
spire-server bundle set \
  -id spiffe://us-east.acme.com \
  -path /path/to/us-east-bundle.json

# Now workloads in us-east can verify SVIDs from eu-west and vice versa.
# Service A in US can call Service B in EU with full mTLS verification.
# No shared secrets. No VPN. Just cryptographic trust.

# For partner companies:
# Exchange trust bundles out-of-band (email, secure portal).
# Now your payment service can call BigCorp's API with mTLS,
# and both sides cryptographically verify the other's identity.
# No API keys to rotate. No shared credentials to leak.

SPIRE on Kubernetes — Full Setup

# Deploy SPIRE on Kubernetes using Helm

# 1. Add the SPIFFE helm repo
helm repo add spiffe https://spiffe.github.io/helm-charts-hardened/
helm repo update

# 2. Install SPIRE server
helm install spire-server spiffe/spire-server \
  --namespace spire-system --create-namespace \
  --set trustDomain=mycompany.com

# 3. Install SPIRE agent (DaemonSet — runs on every node)
helm install spire-agent spiffe/spire-agent \
  --namespace spire-system

# 4. Register workloads (tell SPIRE which pods get which identity)
kubectl exec -n spire-system spire-server-0 -- \
  spire-server entry create \
  -spiffeID spiffe://mycompany.com/ns/production/sa/payment-service \
  -parentID spiffe://mycompany.com/spire/agent/k8s_psat/default \
  -selector k8s:ns:production \
  -selector k8s:sa:payment-service

# Any pod in namespace=production with serviceAccount=payment-service
# automatically gets: spiffe://mycompany.com/ns/production/sa/payment-service

# 5. Verify it works:
kubectl exec -n production payment-service-pod -- \
  /opt/spire/bin/spire-agent api fetch x509 \
  -socketPath /run/spire/sockets/agent.sock
# Shows the X.509 SVID with the SPIFFE ID embedded

Using SPIRE SVIDs in Go

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"

    "github.com/spiffe/go-spiffe/v2/spiffetls"
    "github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
    "github.com/spiffe/go-spiffe/v2/spiffeid"
    "github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
    ctx := context.Background()

    // Connect to SPIRE Workload API (auto-discovers via socket)
    source, err := workloadapi.NewX509Source(ctx)
    if err != nil {
        log.Fatalf("Unable to create X509Source: %v", err)
    }
    defer source.Close()

    // Get our own SVID (identity)
    svid, err := source.GetX509SVID()
    if err != nil {
        log.Fatalf("Unable to get SVID: %v", err)
    }
    fmt.Printf("My identity: %s\n", svid.ID)

    // ── Server: Accept connections only from specific SPIFFE IDs ──
    authorizedCaller := spiffeid.RequireFromString(
        "spiffe://mycompany.com/ns/production/sa/order-service",
    )

    // Reuse the X509Source we already opened for the listener's mTLS config
    listener, err := spiffetls.Listen(ctx, "tcp", ":8443",
        spiffetls.MTLSServerWithSource(tlsconfig.AuthorizeID(authorizedCaller), source),
    )
    if err != nil {
        log.Fatalf("Unable to create TLS listener: %v", err)
    }

    http.HandleFunc("/api/charge", func(w http.ResponseWriter, r *http.Request) {
        // The caller's SPIFFE ID has been verified by SPIRE
        fmt.Fprintln(w, "Payment processed! Caller verified.")
    })
    log.Println("Payment service on :8443 (SPIFFE mTLS)")
    log.Fatal(http.Serve(listener, nil))
}

// ── Client: Connect using SPIFFE identity ──
func callPaymentService(ctx context.Context) {
    targetID := spiffeid.RequireFromString(
        "spiffe://mycompany.com/ns/production/sa/payment-service",
    )

    conn, err := spiffetls.Dial(ctx, "tcp", "payment-service:8443",
        spiffetls.MTLSClient(tlsconfig.AuthorizeID(targetID)),
    )
    if err != nil {
        log.Fatalf("Unable to connect: %v", err)
    }
    defer conn.Close()
    // Connection is mTLS-protected with auto-rotated certificates
    // Zero certificate files. Zero rotation scripts.
}

m2mauth vs SPIFFE/SPIRE — When to Use Which

m2mauth (simple):
  • No infrastructure to deploy
  • Works with static cert files
  • 5 minutes to set up
  • Manual certificate rotation
  • Best for: 2-20 services, dev/staging

SPIFFE/SPIRE (production):
  • Automatic identity assignment
  • Automatic cert rotation (no downtime)
  • Cross-cluster federation
  • Requires SPIRE infrastructure
  • Best for: 20-1000+ services, production

Production Use Cases

  • Uber uses SPIFFE/SPIRE to issue identities for thousands of microservices across multiple data centres. Every service-to-service call is mTLS-authenticated with SVIDs that rotate every hour.
  • Bloomberg deployed SPIRE to replace static service account credentials across their trading platform — eliminating credential leaks as a threat vector.
  • ByteDance (TikTok) uses SPIRE for workload identity across their global Kubernetes infrastructure, enabling zero-trust networking across regions.
  • HPE (Hewlett Packard Enterprise) acquired Scytale, the company founded by the SPIFFE/SPIRE project's creators, and uses the technology across its hybrid cloud products.
  • Square/Block uses SPIFFE for payment processing services — every transaction flows through mTLS-authenticated connections with automatically rotated certificates.

The "Bottom Turtle" Problem

There's a famous analogy in the SPIFFE community (so famous they named a book after it). It goes like this:

In an old story, someone insists the world rests on the back of a giant turtle. "What's the turtle standing on?" they're asked. "Another turtle." And that one? "It's turtles all the way down!"

Computer security has the same problem. You protect your APIs with secrets (passwords, API keys). You protect the secrets with encryption keys. You protect the encryption keys with a secrets vault. You protect the vault with... more secrets. It's secrets all the way down.

SPIFFE and SPIRE aim to be the bottom turtle — the foundational layer of trust that everything else stands on. Instead of cascading secrets, you have cryptographic identity rooted in platform attestation (the node's identity is verified by the cloud provider or kernel, the workload's identity is verified by the node). No secrets to leak because there are no secrets — just cryptographic proofs.

Think of It as MFA for Workloads

You know how multi-factor authentication (MFA) works for humans — you prove your identity with something you know (password) AND something you have (phone/hardware key). SPIFFE/SPIRE does the same thing for workloads:

  • Something the workload IS: its process attributes (PID, container image hash, Kubernetes service account)
  • Something the workload's node HAS: the node's attestation proof (AWS instance identity document, GCP VM identity token, Kubernetes node certificate)
  • Combined result: a short-lived, cryptographically signed SVID that proves identity without any stored secrets

Beyond Microservices: Where SPIFFE/SPIRE Is Going

Emerging SPIFFE/SPIRE use cases (2025-2026):

  • AI agent identity: AI agents that interact with sensitive systems (databases, APIs, cloud resources) need verifiable, short-lived identities — not long-lived API keys. SPIFFE SVIDs provide exactly this: the agent gets an identity, does its work, and the identity expires automatically.
  • Edge computing security: Edge nodes in retail stores, factories, and cell towers need to authenticate with central cloud services. SPIRE extends the identity control plane to the edge — the same cryptographic verification model, even on far-flung devices with intermittent connectivity.
  • Service mesh trust foundation: Service meshes like Istio and Linkerd already use SPIFFE under the hood for mTLS between sidecars. SPIRE can also serve as a trust foundation across meshes — different clusters, different mesh implementations, one identity framework.
  • Virtual machine identity (KubeVirt): Not everything runs in containers. VMs managed by KubeVirt (or OpenShift Virtualization) can get SPIFFE identities too — same attestation model, same SVIDs, same trust domains. One identity system for containers and VMs.
  • Cross-organisation federation: Two companies that exchange trust bundles can authenticate each other's workloads without sharing any secrets. Your payment service calls your partner's fraud API; both sides verify with SPIFFE, no API keys exchanged, no secrets vault shared.

SPIFFE/SPIRE in 3 Key Facts

  • Graduated CNCF project: Same maturity level as Kubernetes, Prometheus, and Envoy. Production-proven at the highest scale.
  • Platform-agnostic: Works on Kubernetes, VMs, bare metal, Docker, edge devices. Node and workload attestors exist for AWS, GCP, Azure, and more.
  • Enterprise-ready: Red Hat offers an enterprise SPIFFE/SPIRE implementation as the Red Hat Zero Trust Workload Identity Manager (OpenShift operator). HashiCorp, HPE, and others offer commercial SPIRE distributions too.

Practical Example: E-Commerce Platform with m2mauth + SPIFFE

Let's walk through a real production architecture. You're building an e-commerce platform with a handful of microservices. Here's how you'd secure every service-to-service call.

E-Commerce M2M architecture: a SPIRE server issues SVIDs to all services, and every connection is mTLS-authenticated. The services are the API Gateway (public entry), Order Service (processes orders), Payment Service (charges cards), and Inventory Service (stock management).
// ── Example 1: Order Service calling Payment Service ──
// The Order Service needs to charge a customer's card.
// It must prove its identity to the Payment Service.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"

    "github.com/vishalanandl177/m2mauth"
)

// ChargeRequest represents a payment request
type ChargeRequest struct {
    OrderID  string  `json:"order_id"`
    Amount   float64 `json:"amount"`
    Currency string  `json:"currency"`
    UserID   string  `json:"user_id"`
}

// OrderService calls PaymentService with mTLS authentication
func chargeCustomer(order ChargeRequest) error {
    // Create authenticated M2M client
    config := m2mauth.Config{
        CertFile: "/certs/order-service-cert.pem",
        KeyFile:  "/certs/order-service-key.pem",
        CAFile:   "/certs/ca-cert.pem",
    }

    client, err := m2mauth.NewClient(config)
    if err != nil {
        return fmt.Errorf("failed to create M2M client: %w", err)
    }

    // Marshal the request
    body, _ := json.Marshal(order)

    // Call Payment Service — mTLS proves we ARE the Order Service
    resp, err := client.Post(
        "https://payment-service.internal:8443/api/charge",
        "application/json",
        bytes.NewReader(body),
    )
    if err != nil {
        return fmt.Errorf("payment request failed: %w", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("payment failed with status: %d", resp.StatusCode)
    }

    log.Printf("Payment successful for order %s", order.OrderID)
    return nil
}

// ── Example 2: Payment Service (server side) ──
// Only accepts calls from Order Service — rejects everything else

func main() {
    config := m2mauth.Config{
        CertFile: "/certs/payment-service-cert.pem",
        KeyFile:  "/certs/payment-service-key.pem",
        CAFile:   "/certs/ca-cert.pem",
    }

    mux := http.NewServeMux()

    mux.HandleFunc("/api/charge", func(w http.ResponseWriter, r *http.Request) {
        // At this point, mTLS has already verified the caller's certificate.
        // The caller IS the Order Service (or whoever holds the client cert
        // signed by our CA). No API key needed, no JWT needed.

        var req ChargeRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, "Invalid request", http.StatusBadRequest)
            return
        }

        // Process the payment
        log.Printf("Processing payment: order=%s amount=%.2f %s",
            req.OrderID, req.Amount, req.Currency)

        // In production: call Stripe, validate amount, check fraud, etc.

        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(map[string]string{
            "status":     "success",
            "payment_id": "pay_" + req.OrderID,
        })
    })

    mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "OK")
    })

    server, err := m2mauth.NewServer(config, mux)
    if err != nil {
        log.Fatalf("Failed to create server: %v", err)
    }

    log.Println("Payment Service running on :8443 (mTLS required)")
    log.Fatal(server.ListenAndServeTLS(":8443"))
}

Example: Inventory Check with Circuit Breaker

// Real production pattern: m2mauth + circuit breaker + retry
// Order Service checks inventory before placing an order

package main

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "time"

    "github.com/vishalanandl177/m2mauth"
)

type InventoryClient struct {
    httpClient *http.Client
    baseURL    string
}

func NewInventoryClient(certFile, keyFile, caFile, baseURL string) (*InventoryClient, error) {
    config := m2mauth.Config{
        CertFile: certFile,
        KeyFile:  keyFile,
        CAFile:   caFile,
    }

    client, err := m2mauth.NewClient(config)
    if err != nil {
        return nil, err
    }

    // Add timeout (production-critical)
    client.Timeout = 5 * time.Second

    return &InventoryClient{
        httpClient: client,
        baseURL:    baseURL,
    }, nil
}

type StockResponse struct {
    ProductID string `json:"product_id"`
    Available int    `json:"available"`
    Reserved  int    `json:"reserved"`
}

func (ic *InventoryClient) CheckStock(ctx context.Context, productID string) (*StockResponse, error) {
    url := fmt.Sprintf("%s/api/stock/%s", ic.baseURL, productID)

    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return nil, err
    }

    resp, err := ic.httpClient.Do(req)
    if err != nil {
        return nil, fmt.Errorf("inventory service unreachable: %w", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("inventory check failed: status %d", resp.StatusCode)
    }

    var stock StockResponse
    if err := json.NewDecoder(resp.Body).Decode(&stock); err != nil {
        return nil, err
    }

    return &stock, nil
}

func (ic *InventoryClient) ReserveStock(ctx context.Context, productID string, qty int) error {
    url := fmt.Sprintf("%s/api/stock/%s/reserve", ic.baseURL, productID)
    body := fmt.Sprintf(`{"quantity": %d}`, qty)

    req, err := http.NewRequestWithContext(ctx, "POST", url,
        bytes.NewBufferString(body))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", "application/json")

    resp, err := ic.httpClient.Do(req)
    if err != nil {
        return fmt.Errorf("reserve failed: %w", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("reserve failed: status %d", resp.StatusCode)
    }
    return nil
}

// Usage in Order Service:
// inventory, _ := NewInventoryClient(
//     "/certs/order-cert.pem", "/certs/order-key.pem",
//     "/certs/ca-cert.pem", "https://inventory-service.internal:8443",
// )
// stock, err := inventory.CheckStock(ctx, "PROD-123")
// if stock.Available >= orderQty {
//     inventory.ReserveStock(ctx, "PROD-123", orderQty)
// }

Example: SPIFFE + m2mauth Migration (Gradual)

You don't need to switch from m2mauth to SPIRE all at once. Here's how to migrate gradually — one service at a time:

// service_auth.go — Abstraction that supports both m2mauth and SPIFFE
package auth

import (
    "context"
    "fmt"
    "net/http"
    "os"

    "github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
    "github.com/spiffe/go-spiffe/v2/workloadapi"
    "github.com/vishalanandl177/m2mauth"
)

// NewAuthenticatedClient returns an mTLS HTTP client.
// Uses SPIFFE if SPIFFE_ENDPOINT_SOCKET is set, otherwise m2mauth.
func NewAuthenticatedClient() (*http.Client, error) {
    spiffeSocket := os.Getenv("SPIFFE_ENDPOINT_SOCKET")

    if spiffeSocket != "" {
        // Production: Use SPIFFE/SPIRE (auto-rotated certificates)
        source, err := workloadapi.NewX509Source(context.Background())
        if err != nil {
            return nil, fmt.Errorf("SPIFFE source failed: %w", err)
        }
        tlsConfig := tlsconfig.MTLSClientConfig(source, source,
            tlsconfig.AuthorizeAny())
        return &http.Client{
            Transport: &http.Transport{TLSClientConfig: tlsConfig},
        }, nil
    }

    // Development/staging: Use m2mauth (static certificates)
    config := m2mauth.Config{
        CertFile: os.Getenv("TLS_CERT_FILE"),
        KeyFile:  os.Getenv("TLS_KEY_FILE"),
        CAFile:   os.Getenv("TLS_CA_FILE"),
    }
    return m2mauth.NewClient(config)
}

// In your service code — works with both:
// client, err := auth.NewAuthenticatedClient()
// resp, err := client.Get("https://payment-service:8443/api/charge")

// Migration strategy:
// 1. Deploy SPIRE to your cluster
// 2. Set SPIFFE_ENDPOINT_SOCKET on ONE service
// 3. That service uses SPIRE, all others still use m2mauth
// 4. Both work because they're both mTLS — compatible!
// 5. Gradually migrate all services to SPIRE
// 6. Remove static cert files when all services are on SPIRE

Gradual Migration: m2mauth → SPIFFE/SPIRE

  • Phase 1 (all services use m2mauth, static certs): Quick setup. Works for dev, staging, and small production. Manual cert rotation.
  • Phase 2 (deploy SPIRE, migrate the first service): Install the SPIRE server and agents, then migrate one non-critical service. Both still talk mTLS, so they remain fully compatible.
  • Phase 3 (migrate the remaining services one at a time): Each service switches from static certs to SPIRE SVIDs. No downtime; mTLS works with both cert sources.
  • Phase 4 (full SPIRE, remove static certs): All services on SPIRE. Auto-rotation, cross-cluster federation, zero manual cert management.

Getting Started: The Practical Path

Your M2M auth journey:

  1. m2mauth: Start here, with static certs.
  2. cert-manager: Auto-rotate certificates in Kubernetes.
  3. SPIFFE/SPIRE: Full workload identity platform.

Start with m2mauth to get mTLS working in your Go services today. When you outgrow static certificates (20+ services, multi-cluster, compliance requirements), graduate to SPIFFE/SPIRE for automatic identity management. Both solve the same fundamental problem — proving "I am who I say I am" — at different scales.