Networking and TLS

From Canasta Wiki

This page describes the conceptual model for how a Canasta instance handles incoming network traffic: which component terminates TLS, how domains and ports are configured, what changes between Docker Compose and Kubernetes, and how to put Canasta behind an external reverse proxy. Operational walkthroughs for specific tasks (changing the domain name, running on non-standard ports, redirecting legacy URLs) live on dedicated reference pages — see "See also" at the end.

The request path

Every HTTP request to a Canasta instance flows through three layers:

  1. Edge / TLS — terminates HTTPS, optionally provisions a Let's Encrypt certificate.
  2. Cache — Varnish, in front of MediaWiki, with VCL rules that decide what's cacheable.
  3. Application — the MediaWiki container, served behind Apache.

What sits at the edge differs by orchestrator:

  • Docker Compose
      Edge component: Caddy, running inside the instance, listening on the published HTTP_PORT and HTTPS_PORT (default 80 / 443).
      TLS provisioning: Caddy's built-in ACME client (Let's Encrypt HTTP-01).
  • Kubernetes
      Edge component: an ingress controller (nginx, traefik, or cloud LB controller — your choice), configured by Canasta's generated Ingress resource. The Caddy pod inside the instance still serves HTTP internally but does not terminate TLS.
      TLS provisioning: cert-manager (auto-installed if not present) issues a Let's Encrypt certificate via a ClusterIssuer tied to your --ingress-class.

The cache and application layers are identical across orchestrators.

Domain configuration

Canasta tracks domain information in two places that operators directly manipulate:

  • .env — has two domain-related variables: MW_SITE_SERVER (the full URL of the primary wiki, e.g. https://example.com) and MW_SITE_FQDN (just the hostname). These describe the primary (first) wiki only.
  • config/wikis.yaml — the per-wiki URL map. In a wiki farm, additional wikis may live on the same domain at different paths, or on different subdomains; their URLs are managed entirely here.
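For a concrete picture, the two sources might look like this for a small farm. The .env variables are the ones named above; the wikis.yaml field names are illustrative only — check your instance's actual file for the real schema:

```ini
# .env — describes the primary (first) wiki only
MW_SITE_SERVER=https://example.com
MW_SITE_FQDN=example.com
```

```yaml
# config/wikis.yaml — per-wiki URL map (field names are illustrative)
wikis:
  - id: main
    url: https://example.com/wiki        # primary wiki, same host as .env
  - id: docs
    url: https://docs.example.com/wiki   # additional wiki on a subdomain
```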

The CLI's canasta config set MW_SITE_SERVER=... command cascades the change from .env into wikis.yaml, but only for wikis whose current host matches the old MW_SITE_FQDN. Wikis on other hosts (e.g., a subdomain-based farm spanning multiple top-level domains) need their wikis.yaml entries edited directly. See Changing the domain name for the full procedure.

TLS issuance

Compose: Caddy-managed

When the instance is created with a real domain name, Caddy obtains a TLS certificate from Let's Encrypt automatically. The ACME HTTP-01 challenge requires the wider internet to be able to reach the host on port 80. Implications:

  • Custom HTTP_PORT values break ACME — Let's Encrypt always connects on port 80, so the challenge never reaches Caddy. Non-standard ports are therefore appropriate for local development and testing only, not for production with auto-TLS. See Help:Running on non-standard ports for the details.
  • If you want production TLS but need to run on non-standard internal ports (e.g., multiple instances on one host), put an external reverse proxy in front; see "Reverse proxy in front" below.

Kubernetes: cert-manager-managed

When canasta create runs against a Kubernetes cluster with --ingress-class <name> and --tls-email <addr>, Canasta:

  1. Installs cert-manager into the cluster (in the cert-manager namespace) if it isn't already present.
  2. Creates a ClusterIssuer backed by Let's Encrypt — production by default, or staging if you pass --staging-certs.
  3. Adds the appropriate cert-manager annotation to the instance's Ingress resource so the issuer provisions a certificate as a secret.
  4. The ingress controller picks up the secret and serves HTTPS.
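The annotated Ingress from step 3 could be sketched like this; the issuer, host, and service names are placeholders, but cert-manager.io/cluster-issuer is the standard cert-manager annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canasta                    # illustrative name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # the ClusterIssuer from step 2
spec:
  ingressClassName: nginx          # your --ingress-class
  tls:
    - hosts: [example.com]
      secretName: example-com-tls  # cert-manager writes the certificate here
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: caddy        # the instance's internal HTTP service (illustrative)
                port:
                  number: 80
```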

The HTTP-01 challenge runs through the ingress controller, so the same DNS-must-resolve-to-something-reachable rule applies. Until DNS points at the ingress controller's external endpoint (LoadBalancer hostname, ALB DNS, etc.), the challenge stays in pending and the certificate doesn't issue. See the EKS+RDS journey for the concrete pattern (and the gotcha where stale CNAMEs from prior teardowns block certificate issuance after a re-provision).

To skip Canasta's TLS setup entirely — for example, when an external load balancer or cloud-provided TLS terminator handles certificates — pass --skip-tls at canasta create time.

Ports

Compose

Each Compose instance on a single host needs unique ports. The defaults are HTTP_PORT=80 and HTTPS_PORT=443; override them in the instance's .env (or via canasta config set HTTP_PORT=... HTTPS_PORT=...) for additional instances. Setting HTTPS_PORT via canasta config set automatically updates wikis.yaml, MW_SITE_SERVER, and MW_SITE_FQDN so the URL the wiki advertises matches the port you're listening on. See Help:Running on non-standard ports for the workflow.
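For a second instance on the same host, the override is just two .env lines (the example ports are arbitrary):

```ini
# .env — second Compose instance on the same host
HTTP_PORT=8080
HTTPS_PORT=8443
```

As noted under TLS issuance, the HTTP-01 challenge still requires port 80, so this layout suits local use or an external reverse proxy rather than direct auto-TLS.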

Kubernetes

Multiple Kubernetes instances on the same cluster share standard ports (80 / 443) and are differentiated by hostname through the ingress controller. The instance's namespace (canasta-<id>) and Ingress resource name keep them isolated. There's no port collision because the ingress controller is the only thing bound to 80 / 443 — instances are dispatched by host header.
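Concretely, two instances' Ingress resources differ only in host; a sketch with illustrative names, each resource living in its own canasta-<id> namespace:

```yaml
# namespace canasta-a
spec:
  rules:
    - host: wiki-a.example.com   # instance A answers for this host
---
# namespace canasta-b
spec:
  rules:
    - host: wiki-b.example.com   # instance B answers for this host
```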

Reverse proxy in front

When Canasta runs behind an external reverse proxy that already terminates TLS — nginx, a cloud load balancer, Cloudflare in "Flexible SSL" mode, etc. — Caddy needs to stay on HTTP only. Otherwise Caddy attempts its own ACME challenges (which fail behind the proxy), issues HTTPS redirects that loop, and competes with the proxy for ports 80 and 443.

Set CADDY_AUTO_HTTPS=off in the instance's .env (either at create time via -e envfile, or after the fact with canasta config set CADDY_AUTO_HTTPS=off). Caddy then generates a Caddyfile with http:// site addresses, listens on port 80 only, and trusts the external proxy to handle TLS termination, ACME, and HTTPS-redirect logic.
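With CADDY_AUTO_HTTPS=off, an external nginx terminating TLS might forward like this. A minimal sketch: the certificate paths and the upstream port are assumptions, here for a Compose instance with HTTP_PORT=8080:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/tls/example.com.crt;  # managed outside Canasta
    ssl_certificate_key /etc/nginx/tls/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;                # Caddy's HTTP-only port
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
}
```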

This is also the right setup when running multiple Compose instances on one host with shared TLS — the external proxy holds the public ports and forwards to each instance on its internal port. See Deploying behind a reverse proxy for the full configuration.

URL redirects (Caddy site config)

For migrating an existing wiki where the URL shape differs from Canasta's (e.g., a previous install at the document root with $wgScriptPath = "" served pages at /index.php?title=Page instead of Canasta's /wiki/Page), Caddy can redirect the legacy patterns to the new ones at the edge. This avoids 404s for external links and bookmarks.

Add redirect rules to the instance's config/Caddyfile.site — Canasta's per-instance Caddyfile snippet that's included into the main Caddy config. See Redirecting legacy URL paths for examples.
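A legacy-URL redirect could be sketched like this in config/Caddyfile.site, using Caddy v2 syntax (the matcher name is arbitrary):

```caddyfile
# Send /index.php?title=Some_Page to /wiki/Some_Page
@legacy {
    path /index.php
    query title=*
}
redir @legacy /wiki/{http.request.uri.query.title} 301
```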

Caddyfile.site takes the place of .htaccess in an Apache-based install. It can also carry security headers, custom rate-limit blocks, and other site-wide directives — just be aware that broad query-parameter blocks (e.g., blocking any URL containing returnto=) can catch real user flows; MediaWiki uses returnto on every login link.

Caveats and known limitations

TLS state lives on the operator host (Compose) or in cluster Secrets (K8s)

For Compose instances, Caddy's Let's Encrypt account and issued certificates are stored in the caddy-data Docker volume. Backing up the instance (see Help:Backup and restore) captures this volume. For K8s instances, certificates are Kubernetes Secrets in the instance's namespace and live until cert-manager renews or replaces them.

Multi-replica Caddy is not supported

The caddy-data volume is RWO; Caddy is single-replica. For HA TLS termination on Kubernetes, place an external load balancer in front and pass --skip-tls. See Multi-node Kubernetes — Caveats.

DNS rotation after teardown

For K8s instances with cloud ingress controllers, every fresh install of the ingress controller provisions a new load balancer. CNAMEs pointing at the previous one go stale. cert-manager challenges sit in pending indefinitely until DNS is updated. Always update DNS to today's load-balancer hostname before expecting certificate issuance after a re-provision.

See also