Multi-host management

From Canasta Wiki

The Canasta CLI supports managing Canasta instances on remote hosts from a single controller machine, in addition to managing instances on the local host. This page covers the available targeting mechanisms, saved-host inventory, and prerequisites.

When you need it

Multi-host management is useful when:

  • The controller (where you run canasta commands) is separate from the wiki host (where the containers run).
  • You manage several Canasta instances spread across multiple machines.
  • You want to promote configuration from a development host to staging/production via GitOps.

For a single-machine setup where the controller is also the host, you don't need any of this — the CLI defaults to managing local instances.

Prerequisites

  • SSH access from the controller to each remote host. Key-based authentication is recommended; password auth works but you'll be prompted on every command.
  • The remote host must satisfy the target-host prerequisites for whichever orchestrator you plan to use (Docker on the host for Compose; kubectl + cluster access for Kubernetes).
  • Run canasta doctor --host <remote> to verify before creating instances.
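As a sketch of the key-based-auth prerequisite, the controller-side setup looks like this (the key path is a temporary placeholder and the remote destination is illustrative; adapt both for your hosts):

```shell
# Generate a dedicated key pair on the controller. No passphrase here for
# brevity; use one in practice. The directory and key name are illustrative.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -q -f "$keydir/canasta_ed25519" -C 'canasta-controller'

# Confirm both halves of the key pair exist.
ls "$keydir"

# Install the public key on each remote host (placeholder destination;
# requires the host to be reachable, so it is commented out here):
# ssh-copy-id -i "$keydir/canasta_ed25519.pub" ubuntu@prod1.example.com
```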

The --host flag

A subset of canasta commands accept a --host flag identifying the target machine:

  Command                              How --host is used
  canasta create --host <host>         Create the new instance on <host>.
  canasta list --host <host>           List instances filtered to <host>.
  canasta version --host <host>        Filter the list-all view to instances on <host>.
  canasta doctor --host <host>         Check dependencies on <host>.
  canasta install --host <host>        Install dependencies on <host>.
  canasta uninstall --host <host>      Uninstall dependencies from <host>.

For other per-instance commands (canasta start, canasta backup create, canasta gitops pull, etc.), the host is resolved automatically from the instance registry — the host where the instance was created is recorded in conf.json and used for subsequent commands. You don't pass --host on those commands.

The value of --host is either:

  • An SSH destination, e.g. user@host.example.com or just host.example.com; or
  • A short name from your saved-host inventory (see below).

Inline SSH destination

This is the simplest form. SSH keys must already be in place; new host keys are accepted automatically on first connection.

canasta create --host ubuntu@prod1.example.com -i wiki-prod -w main

Saved hosts (hosts.yml)

For hosts you target repeatedly, save a short name with canasta host add. Saved hosts live in $CANASTA_CONFIG_DIR/hosts.yml on the controller (Linux /etc/canasta/ or ~/.config/canasta/; macOS ~/Library/Application Support/canasta/).

# Save a host
canasta host add --name prod1 --ssh ubuntu@prod1.example.com

# Save a host with a non-default Python interpreter (native mode)
canasta host add --name prod2 --ssh canasta@10.0.0.5 --python /usr/bin/python3

# List saved hosts
canasta host list

# Remove a saved host
canasta host remove --name prod1

After saving, target by short name:

canasta create --host prod1 -i wiki-prod -w main

You can also edit hosts.yml directly for advanced options (SSH port, jump host, custom inventory variables). The file uses standard Ansible inventory format:

all:
  hosts:
    prod1:
      ansible_host: prod1.example.com
      ansible_user: canasta
      ansible_python_interpreter: /usr/bin/python3
      ansible_port: 2222
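As a sketch, an inventory like the one above can also be written by hand. This example uses a temporary directory so it is safe to try anywhere; on a real controller the file lives at $CANASTA_CONFIG_DIR/hosts.yml:

```shell
# Write a minimal hosts.yml in Ansible inventory format. A temp dir stands in
# for $CANASTA_CONFIG_DIR here; values mirror the example entry above.
demo_dir=$(mktemp -d)
cat > "$demo_dir/hosts.yml" <<'EOF'
all:
  hosts:
    prod1:
      ansible_host: prod1.example.com
      ansible_user: canasta
      ansible_python_interpreter: /usr/bin/python3
      ansible_port: 2222
EOF

# Sanity-check that the entry is present before pointing the CLI at it.
grep -n 'ansible_host: prod1.example.com' "$demo_dir/hosts.yml"
```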

Listing across hosts

By default, canasta list shows every instance in the registry, regardless of which host it's on. To filter to a single host:

canasta list --host prod1

The same filter is available on canasta version:

canasta version --host prod1

Inspecting a remote instance

To check pod / PVC / ingress / certificate state for a single instance on a remote host without SSH-ing into it and running orchestrator-native commands:

canasta status --id mywiki

The host comes from the instance's registry entry, so once an instance is registered on a host it doesn't matter which laptop or which controller you run canasta status from — it routes to the right host automatically.

Inspecting a remote Kubernetes cluster

When canasta runs on a different machine than the cluster (the typical multi-host setup), several inspection commands accept --host <name> to dispatch via SSH to a host that has a working kubeconfig — usually the control plane:

# StorageClasses installed on the cluster + canasta's default
canasta storage list --host prod1

# Argo CD admin password (decoded from argocd-initial-admin-secret)
canasta argocd password --host prod1

# Argo CD Applications and their sync/health state
canasta argocd apps --host prod1

# Open Argo CD's web UI via an SSH tunnel + kubectl port-forward
canasta argocd ui --host prod1

These replace the ssh prod1 'sudo k3s kubectl …' invocations you'd otherwise need. They work against any registered host that can reach the cluster's API server — typically the control plane node, since canasta install k8s-cp writes a working kubeconfig there.
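For reference, here is a manual equivalent of the password and ui commands. The argocd namespace and argocd-server service name are assumptions about a default Argo CD install, and prod1 and port 8080 are placeholders; none of it is required by the CLI:

```shell
# Manual equivalent of `canasta argocd password --host prod1` (assumes the
# default argocd namespace; needs a reachable host, so shown as a comment):
#   ssh prod1 "kubectl -n argocd get secret argocd-initial-admin-secret \
#       -o jsonpath='{.data.password}' | base64 -d"
#
# Manual equivalent of `canasta argocd ui --host prod1` (local port 8080 is
# an arbitrary choice):
#   ssh -L 8080:localhost:8080 prod1 \
#       kubectl -n argocd port-forward svc/argocd-server 8080:443
#
# Kubernetes stores Secret values base64-encoded; the decode step, run
# locally on a sample value:
echo 'cGFzc3dvcmQxMjM=' | base64 -d   # prints "password123"
```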

canasta argocd apps lists the Argo CD Applications under Canasta's management — i.e., instances that have been placed under GitOps with canasta gitops init. A freshly created K8s instance (Helm-installed by canasta create, with Argo CD itself running in the cluster) does not yet have an Application; the list stays empty until gitops init runs. See Help:Orchestrators for the full picture.

Promoting configuration across hosts

Multi-host installs are typically paired with GitOps to share configuration across a dev → staging → prod pipeline. Each host joins a shared gitops repository with a different role (source, sink, or both); changes flow from source hosts to sinks via canasta gitops pull. See the GitOps guide for the full workflow.

See also

  • Help:Orchestrators — choosing between Compose and Kubernetes orchestrators on a target host
  • Help:Backup and restore — Restic-based backup of instances on remote hosts (the same canasta backup commands target hosts via the registry)
  • Help:Installation — installing the Canasta CLI on the controller and on each target host
  • Help:GitOps — GitOps shared/per-host vars and the source/sink role model
  • Help:Multi-node Kubernetes — when "multi-host" also means "multi-node K8s cluster"