Orchestrators
Canasta CLI supports two orchestrators for running your wiki stack:
- Docker Compose (default) — the recommended option for most users
- Kubernetes — for production-scale deployments with multiple nodes, multi-replica web pods, and shared storage
Both orchestrators share the same CLI commands (canasta start, canasta stop, canasta upgrade, etc.) and the same instance directory layout (.env, config/, extensions/, skins/, images/).
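That shared layout looks roughly like this (the per-directory notes are general MediaWiki conventions rather than Canasta-specific documentation):

```
myinstance/
├── .env          # environment variables for the stack
├── config/       # LocalSettings.php and related configuration
├── extensions/   # user-installed MediaWiki extensions
├── skins/        # user-installed MediaWiki skins
└── images/       # file uploads
```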
Docker Compose
Docker Compose is the default orchestrator. No -o flag is needed:
```shell
canasta create -i myinstance -w main -n example.com
```
This creates a standard Docker Compose stack with containers for the web server, database, Caddy reverse proxy, and other services. All data is stored on the host in the instance directory.
Docker Compose supports all Canasta features including development mode, backup and restore, and observability.
docker-compose.override.yml
Docker Compose automatically merges docker-compose.override.yml with the main docker-compose.yml when present. Use this file to customize services without modifying the managed stack files.
You can provide an override file at creation time with the -r flag:
```shell
canasta create -i myinstance -w main -n example.com -r my-overrides.yml
```
Or create/edit the file directly in the instance directory at any time.
Common use cases:
Custom service images — build a service with additional plugins or configuration:
```yaml
services:
  elasticsearch:
    build:
      context: ./build
      dockerfile: Dockerfile.elasticsearch
    image: canasta-elasticsearch-icu:7.10.2
```
See Custom Elasticsearch plugins for a full example.
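As a sketch of what `build/Dockerfile.elasticsearch` might contain (the base image tag matches the example above, and `analysis-icu` is the stock Elasticsearch ICU plugin; treat both as illustrative rather than the exact file from that page):

```dockerfile
# Illustrative only: extend the stock Elasticsearch image with the ICU plugin.
FROM docker.elastic.co/elasticsearch/elasticsearch:7.10.2
RUN bin/elasticsearch-plugin install --batch analysis-icu
```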
Extra volumes — mount additional directories into a container:
```yaml
services:
  web:
    volumes:
      - ./my-data:/var/www/mediawiki/w/my-data
```
Environment variables — add or override environment variables for a service:
```yaml
services:
  web:
    environment:
      - MY_CUSTOM_VAR=value
```
After editing the override file, restart the instance for changes to take effect:
```shell
canasta restart
```
The override file is included in backups.
Kubernetes
Canasta CLI deploys to pre-existing Kubernetes clusters using Helm charts and (optionally) Argo CD for GitOps reconciliation. The CLI does not provision the cluster itself — you bring your own (k3s, kind, EKS, GKE, AKS, on-prem, etc.).
Prerequisites
The controller (where you run canasta) needs:
| Tool | Purpose | Install (macOS) | Install (Linux) |
|---|---|---|---|
| kubectl | Kubernetes CLI | `brew install kubectl` | See docs |
| Helm 3.10+ | Chart deployment | `brew install helm` | See docs |
You also need kubectl configured to point at your target cluster (`kubectl cluster-info` must succeed). For a k3s cluster, that typically means making `/etc/rancher/k3s/k3s.yaml` readable by the Ansible user: by default it is installed root-owned with mode 0600, so kubectl fails with a permission-denied error.
If you installed k3s via canasta install k8s-cp (and joined workers with canasta install k8s-worker --cp-host …), the cp host has a working kubeconfig at /etc/rancher/k3s/k3s.yaml plus a 0644 user copy at ~/.kube/config. To run kubectl from a separate controller, copy that file down and rewrite the server: URL to a name covered by the cert SANs (--public-ip on install) — see Help:Multi-node Kubernetes for the full walkthrough.
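The `server:` rewrite is a one-line `sed`. The snippet below runs against a scratch file so it is safe to try anywhere; replace the placeholder hostname `cp.example.com` with a name actually covered by your cert SANs, and run the `sed` against your real copied kubeconfig:

```shell
# Scratch stand-in for the kubeconfig copied down from the cp host;
# the stock k3s file points kubectl at 127.0.0.1, which only works on the cp.
printf 'server: https://127.0.0.1:6443\n' > kubeconfig.copy

# Rewrite to a SAN-covered name (set via --public-ip at install time).
# cp.example.com is a placeholder hostname.
sed -i 's|https://127.0.0.1:6443|https://cp.example.com:6443|' kubeconfig.copy

cat kubeconfig.copy
# server: https://cp.example.com:6443
```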
If you installed k3s outside Canasta (e.g. curl -sfL https://get.k3s.io | sh directly), make the file readable yourself before running canasta commands against the cluster:
```shell
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
```
Run canasta doctor --host <target> to verify all required tools are in place.
Creating an instance
Use -o kubernetes (or the alias -o k8s):
```shell
canasta create -o kubernetes -i my-wiki -w main -n my-wiki.example.com \
  --ingress-class nginx \
  --storage-class nfs \
  --access-mode ReadWriteMany \
  --tls-email admin@example.com
```
This command:

- Installs Argo CD into the cluster (skip with `--skip-argocd-install` if you already have it).
- Creates an Argo CD `Application` pointing at the Canasta Helm chart.
- Renders chart values from your instance configuration.

Argo CD then reconciles the chart into the namespace, deploying Caddy, MediaWiki, MariaDB (or whatever Compose components are included via profiles), and any optional services.
Once reconciliation completes, the wiki is reachable at the configured ingress URL.
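Conceptually, the generated Application looks something like the sketch below (the repo URL, value keys, and names are illustrative placeholders, not the exact manifest the CLI renders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-wiki            # placeholder instance name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/canasta-helm-charts   # placeholder chart repo
    chart: canasta
    helm:
      values: |
        # instance-specific values rendered by the CLI (keys illustrative)
        siteServer: https://my-wiki.example.com
        ingressClassName: nginx
  destination:
    server: https://kubernetes.default.svc
    namespace: my-wiki
  syncPolicy:
    automated: {}          # Argo CD keeps the cluster in sync with the chart
```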
Managing the instance
The same CLI commands work for both Compose and Kubernetes instances:
```shell
canasta stop     # Scale Helm release replicas to 0
canasta start    # Scale back up
canasta restart  # Stop then start
canasta delete   # Tear down the Helm release and remove the namespace
canasta upgrade  # Pull the latest Canasta image and reconcile
```
To force an immediate Argo CD reconciliation (rather than waiting for its sync interval):
```shell
canasta gitops sync
```
Storage classes and access modes
Multi-node clusters and multi-replica web pods need shared storage that can be mounted ReadWriteMany across all pods. Canasta CLI supports configurable storage:
| Flag | Purpose | Example |
|---|---|---|
| `--storage-class` | Kubernetes StorageClass for content PVCs | `nfs`, `efs` |
| `--access-mode` | PVC access mode | `ReadWriteOnce`, `ReadWriteMany` |
- `ReadWriteOnce` (default) — single-node access. Works on any cluster, but constrains all web replicas to the one node that mounts the volume.
- `ReadWriteMany` — multi-pod access. Required for multi-replica web pods on a multi-node cluster.
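As an illustration of the difference, a claim that requests shared storage looks like this (names here are hypothetical; Canasta generates its own PVCs from the flags above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-wiki-images        # hypothetical claim for the uploads volume
spec:
  accessModes:
    - ReadWriteMany           # mountable by pods on every node
  storageClassName: nfs       # must come from an RWX-capable backend
  resources:
    requests:
      storage: 20Gi
```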
For ReadWriteMany, you need a storage backend that supports it. Canasta CLI provides convenience commands to set up two common options:
```shell
# NFS shared storage (uses the NFS CSI driver)
canasta storage setup nfs --host node1 --install-server --share /srv/nfs/canasta

# AWS EFS (uses the EFS CSI driver)
canasta storage setup efs --host node1 --filesystem-id fs-12345678
```
These commands install the appropriate CSI driver and create a StorageClass you can pass to canasta create --storage-class.
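For reference, the kind of StorageClass this produces for NFS, sketched from the upstream NFS CSI driver's conventions (the object Canasta actually creates may differ in name and policy):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs                   # the name you pass to --storage-class
provisioner: nfs.csi.k8s.io   # NFS CSI driver
parameters:
  server: node1               # NFS server host from the setup command
  share: /srv/nfs/canasta     # exported share from the setup command
reclaimPolicy: Retain
volumeBindingMode: Immediate
```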
Ingress and TLS
The --ingress-class flag tells Canasta which IngressClass to put on the generated Ingress resource. Common values:
- `nginx` — for clusters with the ingress-nginx controller
- `traefik` — k3s ships with this by default
- `alb` — AWS Load Balancer Controller
For automatic TLS certificates from Let's Encrypt, pass --tls-email:
```shell
canasta create -o kubernetes ... --tls-email admin@example.com
```
Canasta installs cert-manager (if not already present) and sets up a ClusterIssuer that issues Let's Encrypt production certificates. To use the staging CA instead (untrusted by browsers, but avoids rate limits during testing), add --staging-certs.
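The resulting issuer is roughly equivalent to the cert-manager resource below (issuer name and solver details are illustrative; only the ACME endpoint and email follow directly from the flags):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod      # illustrative name
spec:
  acme:
    # With --staging-certs this would instead be
    # https://acme-staging-v02.api.letsencrypt.org/directory
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # from --tls-email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx   # matches --ingress-class
```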
To skip Canasta's TLS setup entirely (e.g., when your ingress controller already handles TLS), use --skip-tls.
Argo CD integration
Canasta uses Argo CD to reconcile Helm-chart-based deployments. Each Canasta instance becomes an Argo CD Application pointing at the Canasta Helm chart with instance-specific values.
- Default behavior: `canasta create -o kubernetes` installs Argo CD into the cluster (in the `argocd` namespace) if it isn't already present.
- Existing Argo CD: use `--skip-argocd-install` to skip installation. Configure the namespace with `--argocd-namespace`.
- Force a sync: `canasta gitops sync` triggers an immediate reconciliation rather than waiting for Argo CD's polling interval.
Building from source
To deploy a locally built Canasta image to your cluster, use --build-from:
```shell
canasta create -o kubernetes --build-from /path/to/workspace \
  -i my-wiki -w main -n my-wiki.example.com \
  --registry registry.example.com:5000
```
The CLI builds the image from the workspace and pushes it to --registry (default localhost:5000) so the cluster can pull it. The cluster's worker nodes must be able to reach the registry.
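If the registry is served over plain HTTP (as a `localhost:5000` default typically is), the nodes' container runtime must be told to allow it. On k3s nodes that is done via `/etc/rancher/k3s/registries.yaml`; this is a k3s-specific mechanism and an assumption about your cluster, so adjust for your runtime:

```yaml
# /etc/rancher/k3s/registries.yaml on each node; restart k3s after editing.
mirrors:
  "registry.example.com:5000":
    endpoint:
      - "http://registry.example.com:5000"   # allow plain-HTTP pulls
```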
Multi-node deployments
Multi-node Kubernetes deployments are supported when:
- The cluster has more than one worker node.
- Content PVCs use a `ReadWriteMany`-capable StorageClass (NFS, EFS, CephFS, etc.).
- The web deployment is scaled to multiple replicas (configured via Helm chart values).
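Scaling via chart values might look like the sketch below; the key name is hypothetical, so check the Canasta chart's values.yaml for the real schema before using it:

```yaml
# Hypothetical Helm values override for scaling the web deployment.
web:
  replicaCount: 3   # requires a ReadWriteMany StorageClass for content PVCs
```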
See Help:Multi-node Kubernetes for a step-by-step walkthrough (k3s on AWS EC2 as a worked example, shared storage, scaling), and Help:External database for pointing Canasta at an external managed database — the recommended HA pattern when running multi-node. Help:Multi-host management covers cross-host considerations such as sharing configuration between dev / staging / prod clusters via GitOps.
Limitations
The following features are not yet available for Kubernetes instances:
| Feature | Status | Notes |
|---|---|---|
| Development mode (Xdebug) | Not supported | Use Docker Compose for Xdebug debugging. |
| Compose-only flags (e.g. `--override`) | N/A | Compose-specific overrides don't apply. |
For development and Xdebug-based debugging, Docker Compose remains the recommended orchestrator. For production, scale-out, and multi-host deployments, Kubernetes is the better fit.