How to Deploy to Remote Machines¶
System Manager can deploy configurations to remote machines via SSH. This guide covers deploying to one or more remote hosts.
Prerequisites¶
- SSH access to the target machine
- Nix installed on the target machine
- Your SSH key configured for passwordless authentication (recommended)
Before you can deploy to a remote system, the remote machine's Nix daemon needs to trust incoming store paths from your local machine. Without this, the remote will reject the files you're trying to copy because they lack a signature from a trusted cache.
Edit /etc/nix/nix.conf on the remote system and make sure the following lines are present, adding the username you connect as (for example, "ubuntu" if you're using an Amazon EC2 Ubuntu server):
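The relevant setting is trusted-users; the user list below is only an example and should match whoever you SSH in as:

```
# /etc/nix/nix.conf on the remote machine
# Add the user you connect as (e.g. "ubuntu" on an EC2 Ubuntu server)
trusted-users = root ubuntu
```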
Then restart the Nix daemon:
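On a systemd-based distribution (adjust for your init system) that's typically:

```bash
sudo systemctl restart nix-daemon
```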
Basic Remote Deployment¶
Deploy your local configuration to a remote machine:
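A typical invocation looks like the following. The examples in this guide run System Manager straight from the upstream numtide/system-manager flake and pass the remote host with --target-host; the address is a placeholder, and the exact flag names can vary between releases, so check the CLI Reference if this doesn't match your version:

```bash
nix run 'github:numtide/system-manager' -- switch \
  --flake . \
  --target-host ubuntu@172.31.40.14
```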
Using a Remote Flake¶
Deploy a configuration hosted on GitHub directly to a remote machine:
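For example, assuming your configuration lives in a GitHub repository (your-org/your-config is a placeholder):

```bash
nix run 'github:numtide/system-manager' -- switch \
  --flake 'github:your-org/your-config' \
  --target-host ubuntu@172.31.40.14
```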
Using Different Configurations per Host¶
If you have different configurations for different hosts, use the flake output name:
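For instance, assuming your flake exposes separate configuration outputs named webserver and database (the names and addresses below are placeholders):

```bash
# Each host gets the configuration named after it in the flake
nix run 'github:numtide/system-manager' -- switch \
  --flake '.#webserver' \
  --target-host ubuntu@172.31.40.14

nix run 'github:numtide/system-manager' -- switch \
  --flake '.#database' \
  --target-host ubuntu@172.31.40.15
```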
SSH Configuration Tips¶
To connect successfully, you have a couple of options for handling SSH authentication.
Option 1: SSH Config Entry¶
For easier deployment, add your hosts to ~/.ssh/config:
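An entry might look like this; the alias, address, and key path are examples to adapt:

```
Host webserver-prod
    HostName 172.31.40.14
    User ubuntu
    IdentityFile ~/.ssh/my.pem
```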
Then deploy with just:
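With that entry in place, the alias replaces the full user@host address (webserver-prod is the example name from above, and --target-host is the same assumption as in the earlier examples):

```bash
nix run 'github:numtide/system-manager' -- switch \
  --flake . \
  --target-host webserver-prod
```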
Option 2: SSH Agent¶
If you prefer not to modify your SSH config, you can load your key into the SSH agent before running System Manager:
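For example (the key path is a placeholder):

```bash
# Start an agent for this shell session and load your deployment key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/my.pem
```

Once the key is loaded, run the same deployment command as in the basic example above.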
This approach is particularly useful for automation scripts, as we'll see in the next section.
Unsupported Format
You might be tempted to use Nix's query parameter syntax for SSH keys:
'ubuntu@172.31.40.14?ssh-key=/home/ubuntu/.ssh/my.pem'
Although Nix supports this format for some operations, System Manager does not. It treats the entire string as a literal hostname, resulting in a "Could not resolve hostname" error.
Deploying to Multiple Systems¶
One of the major benefits of System Manager is the ability to manage entire fleets of machines with consistent configurations. Rather than manually connecting to each server, you can script deployments to dozens or even hundreds of systems.
Here's a basic example that deploys to multiple systems:
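A sketch of such a script, assuming the same nix run invocation and placeholder addresses as above:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Replace these with your own user@host addresses
hosts=(
  "ubuntu@172.31.40.14"
  "ubuntu@172.31.40.15"
  "ubuntu@172.31.40.16"
)

for host in "${hosts[@]}"; do
  echo "Deploying to ${host}..."
  nix run 'github:numtide/system-manager' -- switch \
    --flake . \
    --target-host "${host}"
done
```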
Reading Hosts from a File¶
For larger deployments, or when you want to separate your host inventory from your scripts, you can read the target addresses from an external file:
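One way to do this, assuming a hosts.txt file with one address per line:

```bash
#!/usr/bin/env bash
set -euo pipefail

while read -r host; do
  # Skip blank lines and comments
  [[ -z "${host}" || "${host}" == "#"* ]] && continue
  echo "Deploying to ${host}..."
  nix run 'github:numtide/system-manager' -- switch \
    --flake . \
    --target-host "${host}"
done < hosts.txt
```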
Where hosts.txt looks like:
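For example (placeholder addresses):

```
ubuntu@172.31.40.14
ubuntu@172.31.40.15
ubuntu@172.31.40.16
```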
This makes it easy to maintain different host lists for different environments, or to generate the list dynamically from your infrastructure tooling.
Handling SSH Host Key Verification¶
If this is the first time you've connected to the remote systems via SSH, you'll encounter the familiar host key verification prompt:
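It looks roughly like this (the key type and fingerprint will differ):

```
The authenticity of host '172.31.40.14 (172.31.40.14)' can't be established.
ED25519 key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```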
This will interrupt automated scripts. There are a few ways to handle it.
Approach 1: Pre-scan Host Keys¶
You can scan and add all host keys to your known_hosts file before deploying. This is explicit and works regardless of your SSH configuration:
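For example, reusing the hosts.txt inventory from above and stripping the user@ prefix before scanning:

```bash
# Append each host's public keys to known_hosts (-H hashes the hostnames)
while read -r host; do
  [[ -z "${host}" || "${host}" == "#"* ]] && continue
  ssh-keyscan -H "${host#*@}" >> ~/.ssh/known_hosts
done < hosts.txt
```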
Approach 2: Wildcard SSH Config¶
If your servers are all within a predictable subnet (common in cloud environments like AWS VPCs), you can use a wildcard pattern in your SSH config to handle authentication and host key verification automatically:
Add this to ~/.ssh/config:
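Something like the following, adjusting the subnet pattern, user, and key path to your environment:

```
Host 172.31.*
    User ubuntu
    IdentityFile ~/.ssh/my.pem
    StrictHostKeyChecking accept-new
```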
Note
StrictHostKeyChecking accept-new automatically trusts and saves host keys for servers you've never connected to before, without asking for confirmation. However, it will still warn you if a previously-saved key changes, which could indicate a security issue or a reinstalled server.
With this configuration in place, your deployment script becomes much simpler—no agent setup or key scanning required:
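A minimal sketch, assuming the wildcard SSH config above supplies the user and key, so hosts.txt can hold bare addresses:

```bash
#!/usr/bin/env bash
set -euo pipefail

while read -r host; do
  [[ -z "${host}" ]] && continue
  echo "Deploying to ${host}..."
  nix run 'github:numtide/system-manager' -- switch \
    --flake . \
    --target-host "${host}"
done < hosts.txt
```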
Tip
The pre-scanning approach works well if you want to keep everything in a single self-contained script, which is ideal for CI/CD pipelines and automation systems where you may not have control over the SSH config.
Deploying from a Remote Flake¶
Most of the examples so far have used --flake . to reference configuration files on your local machine. But you can also host your Nix configuration in a remote Git repository and deploy directly from there. This is powerful for CI/CD workflows, where the configuration lives in version control and deployments are triggered automatically.
Simply replace the . with a flake URL pointing to your repository:
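For example (the repository and output name are placeholders):

```bash
nix run 'github:numtide/system-manager' -- switch \
  --flake 'github:your-org/your-config#webserver' \
  --target-host ubuntu@172.31.40.14
```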
Keep Your Flake Lock Updated
When using remote flakes, make sure the repository's flake.lock file references a compatible version of System Manager. If the lock file points to an older version, you may encounter errors about missing binaries like system-manager-engine. Run nix flake update in your repository to update the lock file to the latest version.
See Also¶
- Use Remote Flakes - Host your configuration on GitHub
- CLI Reference - Full command documentation