You're building a SaaS product. Ten companies sign up. Then a hundred. Then a thousand. Each company — each tenant — thinks they're the only customer. They expect their data to be private, their experience to be customised, and their performance to be unaffected by what other tenants are doing. Meanwhile, you're running one codebase, one infrastructure, and trying not to go bankrupt on hosting costs.

Welcome to multi-tenancy — the architecture pattern that makes SaaS economically viable. Get it right and you scale to millions of tenants on shared infrastructure. Get it wrong and you have data leaks, noisy neighbours, and midnight pages.

What is Multi-Tenancy?

A tenant is an organisational unit — usually a company, team, or workspace — that uses your SaaS product. Multi-tenancy means multiple tenants share the same application instance and infrastructure, but their data and experience are isolated from each other.

Single-Tenant vs Multi-Tenant
Single-Tenant (one instance per customer)
💻 Separate deployment per customer
🔒 Strongest isolation (separate everything)
💸 Most expensive to operate
🛠 N customers = N deployments to maintain
🎯 On-prem, regulated industries, enterprise
VS
Multi-Tenant (shared instance, isolated data)
💻 One deployment serves all customers
🔒 Logical isolation (same DB, filtered by tenant)
💰 Most cost-efficient
🛠 One deployment to maintain
🎯 SaaS, cloud products, PLG startups

The Three Isolation Models

The biggest architectural decision in multi-tenancy is how to isolate tenant data. There are three main approaches, each with different trade-offs:

Tenant Data Isolation Models
Model 1: Shared Database, Shared Schema. All tenants in one table, filtered by a tenant_id column. Simplest, and the most common choice for startups.
Model 2: Shared Database, Separate Schema. Each tenant gets their own schema (a PostgreSQL schema or MySQL database). The middle ground.
Model 3: Separate Database per Tenant. Each tenant has a completely separate database instance. Strongest isolation, highest cost.

Model 1: Shared Database, Shared Schema (The Default)

This is where most SaaS products start, and where many successfully stay forever. Every table has a tenant_id column. Every query filters by it. Simple.

-- PostgreSQL: Shared schema with tenant_id
CREATE TABLE users (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id   UUID NOT NULL REFERENCES tenants(id),
    email       VARCHAR(255) NOT NULL,
    name        VARCHAR(255),
    role        VARCHAR(50) DEFAULT 'member',
    created_at  TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(tenant_id, email)  -- Email unique WITHIN a tenant, not globally
);

CREATE TABLE orders (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id   UUID NOT NULL REFERENCES tenants(id),
    user_id     UUID REFERENCES users(id),
    total       DECIMAL(10,2),
    status      VARCHAR(50),
    created_at  TIMESTAMPTZ DEFAULT NOW()
);

-- CRITICAL: Create indexes on tenant_id for every table!
CREATE INDEX idx_users_tenant ON users(tenant_id);
CREATE INDEX idx_orders_tenant ON orders(tenant_id);
CREATE INDEX idx_orders_tenant_status ON orders(tenant_id, status);

-- Every query MUST filter by tenant_id
-- ❌ WRONG (data leak!):
SELECT * FROM orders WHERE status = 'pending';

-- ✅ RIGHT:
SELECT * FROM orders WHERE tenant_id = '...' AND status = 'pending';
# Python/Django: Automatic tenant filtering middleware
# Every request must include the tenant context

class TenantMiddleware:
    """Extract tenant from subdomain and inject into request."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Extract tenant from subdomain: acme.myapp.com -> "acme"
        host = request.get_host().split(':')[0]
        subdomain = host.split('.')[0]

        tenant = Tenant.objects.filter(slug=subdomain).first()
        if not tenant:
            return HttpResponse("Tenant not found", status=404)

        request.tenant = tenant
        return self.get_response(request)

# Django model with automatic tenant filtering
class TenantAwareManager(models.Manager):
    def get_queryset(self):
        # NOTE: the base queryset is NOT filtered — always go through for_tenant()
        return super().get_queryset()

    def for_tenant(self, tenant):
        return self.get_queryset().filter(tenant=tenant)

class Order(models.Model):
    tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE)
    total = models.DecimalField(max_digits=10, decimal_places=2)
    status = models.CharField(max_length=50)

    objects = TenantAwareManager()

# Usage in views:
def list_orders(request):
    # ALWAYS filter by request.tenant
    orders = Order.objects.for_tenant(request.tenant).filter(status='pending')
    return JsonResponse(list(orders.values()), safe=False)

# ⚠ The risk: one missing .for_tenant() call = data leak across tenants
# Solution: Use Row-Level Security (RLS) in PostgreSQL as a safety net
-- PostgreSQL Row-Level Security (RLS) — the safety net
-- Even if application code forgets to filter, the DB enforces it

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;  -- apply the policy even to the table owner

CREATE POLICY tenant_isolation ON orders
    USING (tenant_id = current_setting('app.current_tenant_id')::UUID);

-- Before each request, set the tenant context.
-- SET LOCAL scopes the setting to the current transaction:
-- SET LOCAL app.current_tenant_id = 'abc-123-def';

-- Now even "SELECT * FROM orders" only returns the current tenant's data
-- RLS is the LAST line of defence against data leaks
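In application code, that SET is typically issued at the start of each request's transaction. A minimal sketch of the call shape (the helper name and the fake cursor are illustrative; a real psycopg2 cursor has the same execute signature):

```python
def set_tenant_context(cursor, tenant_id: str) -> None:
    """Bind app.current_tenant_id to this transaction (is_local=True),
    so RLS policies see the right tenant and the setting resets on commit."""
    cursor.execute(
        "SELECT set_config('app.current_tenant_id', %s, true)",
        (tenant_id,),
    )

# Fake cursor standing in for a live database connection in this sketch
class FakeCursor:
    def __init__(self):
        self.executed = []

    def execute(self, sql, params=None):
        self.executed.append((sql, params))

cur = FakeCursor()
set_tenant_context(cur, "abc-123-def")
```

The tenant_id travels as a bound parameter, never interpolated into the SQL string.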

Model 2: Shared Database, Separate Schema

Each tenant gets their own PostgreSQL schema (or MySQL database). Tables are identical but namespaced: tenant_acme.orders, tenant_globex.orders.

-- PostgreSQL: Create a schema per tenant
CREATE SCHEMA tenant_acme;
CREATE SCHEMA tenant_globex;

-- Create tables in each schema (same structure)
CREATE TABLE tenant_acme.orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    total DECIMAL(10,2),
    status VARCHAR(50),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE tenant_globex.orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    total DECIMAL(10,2),
    status VARCHAR(50),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Switch schema per request using search_path
SET search_path TO tenant_acme;
SELECT * FROM orders;  -- Only sees acme's orders

SET search_path TO tenant_globex;
SELECT * FROM orders;  -- Only sees globex's orders

# Django: the django-tenants library handles schema switching automatically
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django_tenants.postgresql_backend',
        'NAME': 'myapp',
    }
}
TENANT_MODEL = 'tenants.Tenant'
MIDDLEWARE = ['django_tenants.middleware.main.TenantMainMiddleware', ...]
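If you switch schemas by hand instead of using a library, the catch is that SET search_path takes an identifier, which cannot be sent as a bound parameter — so the tenant slug must be validated before it is interpolated. A sketch (function names are illustrative):

```python
import re

def schema_for(slug: str) -> str:
    """Map a tenant slug to its schema name, rejecting anything that could
    smuggle SQL into SET search_path (identifiers can't be parameterised)."""
    if not re.fullmatch(r"[a-z0-9_]{1,40}", slug):
        raise ValueError(f"Invalid tenant slug: {slug!r}")
    return f"tenant_{slug}"

def use_tenant_schema(cursor, slug: str) -> None:
    # Safe to interpolate only because schema_for() validated the slug
    cursor.execute(f"SET search_path TO {schema_for(slug)}")
```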

Model 3: Separate Database per Tenant

The nuclear option. Each tenant gets their own database instance. Maximum isolation but maximum operational complexity.

# Separate database per tenant — connection routing
import os

TENANT_DB_MAP = {
    'acme': {
        'host': 'acme-db.cluster.us-east-1.rds.amazonaws.com',
        'name': 'acme_production',
    },
    'globex': {
        'host': 'globex-db.cluster.us-east-1.rds.amazonaws.com',
        'name': 'globex_production',
    },
}

def get_db_connection(tenant_slug):
    """Route to the correct database based on tenant."""
    config = TENANT_DB_MAP.get(tenant_slug)
    if not config:
        raise ValueError(f"Unknown tenant: {tenant_slug}")
    return psycopg2.connect(
        host=config['host'],
        dbname=config['name'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
    )

# Used by: banks, healthcare, government — where regulatory
# requirements mandate complete physical data separation
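The operational cost shows up at migration time: every schema change is N rollouts, not one. A sketch of the fan-out plan (hostnames and the plan shape are illustrative):

```python
def migration_plan(tenant_db_map: dict) -> list:
    """Build the ordered list of databases a schema migration must visit.
    With DB-per-tenant, one ALTER TABLE becomes N separate rollouts."""
    return [
        {"tenant": slug, "host": cfg["host"], "dbname": cfg["name"]}
        for slug, cfg in sorted(tenant_db_map.items())
    ]

plan = migration_plan({
    "acme": {"host": "acme-db.example.com", "name": "acme_production"},
    "globex": {"host": "globex-db.example.com", "name": "globex_production"},
})
# Each entry would then get: connect -> apply migrations -> verify -> next tenant
```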

Comparison: Which Model When?

Isolation Model Comparison
| Criteria               | Shared Schema               | Schema-per-Tenant    | DB-per-Tenant           |
|------------------------|-----------------------------|----------------------|-------------------------|
| Isolation              | Logical (tenant_id)         | Schema-level         | Physical (strongest)    |
| Cost at 1000 tenants   | Lowest                      | Medium               | Highest                 |
| Noisy neighbour risk   | High                        | Medium               | None                    |
| Data leak risk         | Highest (one missing WHERE) | Low                  | None                    |
| Schema migration       | One migration for all       | N schemas to migrate | N databases to migrate  |
| Cross-tenant analytics | Easy (same table)           | Possible (UNION)     | Hard (federated query)  |
| Best for               | Most SaaS (start here)      | Mid-market SaaS      | Enterprise / regulated  |

Multi-Domain Architecture

Multi-domain means each tenant gets their own subdomain (or even a completely custom domain). This is how Slack (acme.slack.com), Shopify (my-store.myshopify.com), and Notion (acme.notion.site) work.

Multi-Domain Request Routing
Browser(acme.myapp.com)
Load Balancer(Route by host)
Application(Resolve tenant)
1 GET https://acme.myapp.com/dashboard
Wildcard SSL cert: *.myapp.com
2 Forward to app (Host: acme.myapp.com)
Extract "acme" from subdomain → look up tenant
3 Render acme's dashboard with acme's data

Subdomain Routing (The Standard Approach)

# DNS: Wildcard A record
# *.myapp.com -> your load balancer IP
# One DNS record handles ALL tenant subdomains

# nginx: Route all subdomains to the app
server {
    listen 443 ssl;
    server_name *.myapp.com;

    ssl_certificate /etc/ssl/wildcard.myapp.com.pem;
    ssl_certificate_key /etc/ssl/wildcard.myapp.com.key;

    # Extract subdomain (dots escaped in the regex)
    set $subdomain "";
    if ($host ~* "^(.+)\.myapp\.com$") {
        set $subdomain $1;
    }

    location / {
        proxy_pass http://app-backend;
        proxy_set_header Host $host;
        proxy_set_header X-Tenant-Subdomain $subdomain;
    }
}

# Python/FastAPI: Resolve tenant from subdomain
from fastapi import FastAPI, Request, Depends, HTTPException

app = FastAPI()

async def get_current_tenant(request: Request):
    host = request.headers.get("host", "")
    subdomain = host.split(".")[0]

    tenant = await Tenant.get_by_slug(subdomain)
    if not tenant:
        raise HTTPException(status_code=404, detail="Workspace not found")
    return tenant

@app.get("/api/dashboard")
async def dashboard(tenant: Tenant = Depends(get_current_tenant)):
    # tenant is automatically resolved from the subdomain
    orders = await Order.filter(tenant_id=tenant.id).all()
    return {"tenant": tenant.name, "orders": len(orders)}

Custom Domain Support

Some enterprise tenants want their own domain: app.acme-corp.com instead of acme.myapp.com. This is harder but very valuable for enterprise sales.

# Custom domain flow:
# 1. Tenant registers their domain in your settings page
# 2. They add a CNAME record: app.acme-corp.com -> custom.myapp.com
# 3. Your load balancer accepts the traffic (SNI-based routing)
# 4. You issue a TLS cert for their domain (via Let's Encrypt)
# 5. Your app looks up the tenant by custom domain

-- Database: Store custom domain mappings
CREATE TABLE tenant_domains (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id),
    domain VARCHAR(255) NOT NULL UNIQUE,
    ssl_status VARCHAR(50) DEFAULT 'pending',
    verified_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

# Tenant resolution: check subdomain first, then custom domain
async def resolve_tenant(request: Request):
    host = request.headers.get("host", "").split(":")[0]

    # Check if it's a subdomain of our app
    if host.endswith(".myapp.com"):
        slug = host.replace(".myapp.com", "")
        return await Tenant.get_by_slug(slug)

    # Check custom domain mapping
    mapping = await TenantDomain.get_by_domain(host)
    if mapping:
        return await Tenant.get(id=mapping.tenant_id)

    raise HTTPException(404, "Unknown domain")

# SSL for custom domains: Use Caddy or cert-manager
# Caddy auto-provisions Let's Encrypt certs on first request
# cert-manager (K8s) can handle cert issuance at scale
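Step 2 of the flow above implies a verification job: before issuing a cert, confirm the tenant's CNAME actually points at your ingress host. A sketch with the DNS lookup injected so a real resolver (e.g. dnspython) can be swapped in; the stub records are illustrative:

```python
def verify_cname(domain: str, expected_target: str, resolve) -> bool:
    """Return True if the tenant's CNAME points at our ingress host.
    `resolve` is a callable domain -> CNAME target, injected for testability."""
    try:
        target = resolve(domain)
    except Exception:
        return False  # no record, NXDOMAIN, resolver error, etc.
    # DNS answers often carry a trailing dot; normalise before comparing
    return target.rstrip(".") == expected_target.rstrip(".")

# Stub resolver standing in for a real DNS lookup
records = {"app.acme-corp.com": "custom.myapp.com."}
ok = verify_cname("app.acme-corp.com", "custom.myapp.com",
                  resolve=lambda d: records[d])
```

Only after verification succeeds would you flip ssl_status and request the certificate.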

How Real Companies Do It

Multi-Tenancy at Scale: Real Companies
| Company    | Isolation Model            | Domain Pattern                          | Notable Detail                                               |
|------------|----------------------------|-----------------------------------------|--------------------------------------------------------------|
| Slack      | Shared schema (MySQL)      | acme.slack.com                          | Sharded by workspace — each shard holds ~500 workspaces      |
| Shopify    | Sharded shared schema      | my-store.myshopify.com + custom domains | Pods architecture — each "pod" serves ~10K shops             |
| Atlassian  | DB per tenant (migrated)   | mysite.atlassian.net                    | Migrated from shared to isolated for enterprise compliance   |
| Notion     | Shared schema (PostgreSQL) | acme.notion.site                        | Single massive PostgreSQL with partitioning                  |
| Salesforce | Shared schema (Oracle)     | Custom domains                          | ~100K tenants per database instance, metadata-driven schema  |

The Noisy Neighbour Problem

In shared infrastructure, one tenant's heavy workload can degrade performance for everyone else. A single tenant running a massive report at 3 PM shouldn't slow down every other tenant's dashboard.

# Solutions for noisy neighbours:

# 1. Rate limiting per tenant
from fastapi import Request, Depends
from slowapi import Limiter

limiter = Limiter(key_func=lambda request: request.state.tenant.id)

@app.get("/api/report")
@limiter.limit("10/minute")  # Per tenant, not global
async def generate_report(request: Request, tenant = Depends(get_current_tenant)):
    return await run_heavy_report(tenant.id)

# 2. Resource quotas in Kubernetes (per-tenant namespace)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-acme-quota
  namespace: tenant-acme
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"

# 3. Database connection pooling per tenant
# Use PgBouncer with per-tenant connection limits
# Prevents one tenant from exhausting the connection pool

# 4. Queue isolation
# Separate task queues per tenant tier:
# - Free tier: shared queue, lower priority
# - Pro tier: dedicated queue, higher concurrency
# - Enterprise: dedicated worker pool
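The tier-to-queue mapping from point 4 can be a single routing function feeding your task broker (queue names here are illustrative; with Celery this would plug into apply_async's queue argument):

```python
def queue_for(tenant_id: str, tier: str) -> str:
    """Pick the task queue for a tenant based on its plan tier."""
    if tier == "enterprise":
        return f"tenant-{tenant_id}-tasks"  # dedicated worker pool
    if tier == "pro":
        return "pro-tasks"                  # dedicated queue, higher concurrency
    return "free-tasks"                     # shared queue, lower priority

# e.g. run_heavy_report.apply_async(args=[tenant_id],
#                                   queue=queue_for(tenant_id, tier))
```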

Tenant-Aware Caching

# Redis: Prefix keys with tenant_id
import json
import redis

r = redis.Redis()

def cache_get(tenant_id: str, key: str):
    return r.get(f"tenant:{tenant_id}:{key}")

def cache_set(tenant_id: str, key: str, value: str, ttl: int = 300):
    r.setex(f"tenant:{tenant_id}:{key}", ttl, value)

# Usage:
cache_set("acme", "dashboard_stats", json.dumps(stats))
data = cache_get("acme", "dashboard_stats")

# NEVER cache without tenant prefix — that's how data leaks happen
# ❌ r.get("dashboard_stats")  -- whose stats? EVERYONE's mixed together
# ✅ r.get("tenant:acme:dashboard_stats")  -- acme's stats only

Multi-Tenant Security Checklist

Multi-Tenant Security: Non-Negotiable Checklist
1. Every query MUST filter by tenant_id
Use RLS (Row-Level Security) as a safety net. One missing WHERE clause = data breach across tenants.
2. Every cache key MUST be prefixed with tenant
A cache without tenant prefix serves one tenant's data to another. Namespace everything.
3. Every file upload MUST be stored in tenant-scoped paths
s3://uploads/tenant-acme/file.pdf — not s3://uploads/file.pdf. Object-level isolation.
4. Every background job MUST carry tenant context
When a Celery/Sidekiq job runs, it must know which tenant it's processing for. Pass tenant_id explicitly.
5. Every API response MUST be scoped to the requesting tenant
Test: log in as tenant A, try to access tenant B's resources via ID guessing. Should return 403, not data.
6. Audit logging MUST include tenant_id
When something goes wrong, you need to know which tenant was affected. Log tenant_id on every operation.
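Item 5's ID-guessing test reduces to a property you can assert on any lookup function: a resource outside the requesting tenant must come back as an error, never as data. A sketch (data and names are illustrative; this version returns 404 rather than 403 so the response doesn't even confirm the ID exists):

```python
def fetch_order(order_id: str, request_tenant_id: str, orders: dict):
    """Look up an order only if it belongs to the requesting tenant."""
    order = orders.get(order_id)
    if order is None or order["tenant_id"] != request_tenant_id:
        return (404, None)  # same response for "missing" and "not yours"
    return (200, order)

orders = {
    "o1": {"tenant_id": "acme", "total": 99},
    "o2": {"tenant_id": "globex", "total": 12},
}
# Tenant A reading its own order vs guessing tenant B's ID:
own = fetch_order("o1", "acme", orders)
cross = fetch_order("o2", "acme", orders)
```

Run exactly this scenario against your real API before launch: authenticate as tenant A and enumerate tenant B's resource IDs.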

The Recommended Architecture

Production Multi-Tenant Architecture
DNS: *.myapp.com (wildcard) + custom domain CNAME support. One wildcard DNS record handles all subdomains; custom domains via CNAME + cert-manager.
Load Balancer / API Gateway: Resolve tenant from Host header. Extract the subdomain or look up the custom domain, then inject tenant context into the request.
Application: Tenant middleware + RLS + scoped services. Every request carries tenant context. Services filter by tenant. RLS as safety net.
Database: Shared schema + tenant_id + RLS. Start shared, add schema-per-tenant for the enterprise tier. Partition large tables by tenant_id.
Cache / Storage: Tenant-prefixed keys and paths. Redis: tenant:{id}:key. S3: s3://bucket/tenant-{id}/. Queues: tenant-{id}-tasks.

Start with Model 1 (shared schema + tenant_id). Add RLS from day one. Support subdomains first, custom domains later. Use tenant middleware to inject context everywhere. Rate limit per tenant. Prefix all cache keys and file paths. And test, test, test: log in as tenant A, try to access tenant B's data. If you can — you have a bug that needs fixing before launch.

Multi-tenancy is not a feature you add later. It's a foundation you build from the first database migration. Get it right early and you'll scale from 10 tenants to 10,000 without rewriting your architecture.