SSHOT - SSH Orchestrator Tool
SSHOT (SSH Orchestrator Tool) is a lightweight, Ansible-inspired tool designed for sysadmins who need straightforward SSH orchestration without Python dependency headaches. Built with Go for portability and simplicity, it uses familiar YAML playbooks—perfect for daily administrative tasks.
Table of Contents
- Why SSHOT?
- Installation
- Quick Start
- Core Concepts
- Usage Examples
- Command Line Reference
- Configuration Reference
- Advanced Features
- Troubleshooting
Why SSHOT?
If you’re a sysadmin who loves Ansible’s YAML approach but sometimes finds Python dependencies challenging, SSHOT might be for you.
SSHOT is NOT a replacement for Ansible - it doesn’t try to be. Ansible is a comprehensive automation platform with an extensive ecosystem. SSHOT is simply a focused helper tool for sysadmins who need straightforward SSH orchestration.
Key Benefits
- 🪶 No Python headaches - Single Go binary, no dependencies, no virtualenvs, no pip issues
- 🎯 Sysadmin-focused - Built for daily SSH tasks, not enterprise-wide automation
- ⚡ Portable - Copy one binary, run anywhere (Linux, macOS, even on edge devices)
- 📝 Familiar syntax - If you know Ansible YAML, you already know SSHOT
- 🚀 Fast - Go’s performance for quick task execution
Installation
From Release Binary
```bash
# Download from GitHub releases
wget https://github.com/fgouteroux/sshot/releases/latest/download/sshot_Linux_x86_64.tar.gz
tar xzf sshot_Linux_x86_64.tar.gz
sudo mv sshot /usr/local/bin/
```
Using Go Install
```bash
go install github.com/fgouteroux/sshot@latest
```
Build from Source
```bash
git clone https://github.com/fgouteroux/sshot.git
cd sshot
go build -o sshot
sudo mv sshot /usr/local/bin/
```
Quick Start
1. Create a Simple Inventory
```yaml
# inventory.yml
ssh_config:
  user: admin
  key_file: ~/.ssh/id_rsa
  port: 22

hosts:
  - name: web1
    address: 192.168.1.10
  - name: web2
    address: 192.168.1.11
```
2. Create a Basic Playbook
```yaml
# playbook.yml
name: Deploy Application
tasks:
  - name: Update system
    command: apt-get update
    sudo: true
  - name: Install nginx
    command: apt-get install -y nginx
    sudo: true
  - name: Start nginx
    command: systemctl start nginx
    sudo: true
```
3. Run SSHOT
```bash
sshot -i inventory.yml playbook.yml
```
Core Concepts
Playbooks and Inventory
SSHOT uses two key YAML files to define your automation:
- Inventory - Defines servers, groups, and SSH connection details
- Playbook - Defines tasks to execute on servers
You can use separate files or combine them into a single file.
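As an illustration, here is a minimal combined file; the `inventory:` and `playbook:` top-level keys wrap the same structures used when the files are separate (host names and addresses are placeholders):

```yaml
# site.yml - inventory and playbook in one file
inventory:
  ssh_config:
    user: admin
    key_file: ~/.ssh/id_rsa
  hosts:
    - name: web1
      address: 192.168.1.10

playbook:
  name: Combined Example
  tasks:
    - name: Check uptime
      command: uptime
```

Run it with `sshot site.yml`; the `-i` flag is only needed when the inventory lives in a separate file.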
Inventory Structure
The inventory defines:
- SSH configuration defaults
- Hosts with their connection details
- Host grouping and execution order
- Variables for use in tasks
Playbook Structure
The playbook defines:
- A sequence of tasks to run
- Task dependencies and conditions
- Execution options (parallel or sequential)
- Retry logic and error handling
Task Types
SSHOT supports multiple task types:
- Command - Execute shell commands
- Script - Upload and run local scripts
- Copy - Transfer files to remote hosts
- Wait For - Wait for a condition to be met
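A single playbook can mix all four task types. The sketch below uses illustrative paths and ports:

```yaml
tasks:
  - name: Command task
    command: systemctl restart nginx
    sudo: true
  - name: Script task (uploads and runs a local script)
    script: ./scripts/setup.sh
  - name: Copy task
    copy:
      src: ./nginx.conf
      dest: /etc/nginx/nginx.conf
      mode: "0644"
    sudo: true
  - name: Wait For task
    wait_for: port:80
```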
Usage Examples
Basic Example
This example updates packages on a single server:
```yaml
# inventory.yml
ssh_config:
  user: admin
  key_file: ~/.ssh/id_rsa

hosts:
  - name: server1
    address: 192.168.1.100
```

```yaml
# playbook.yml
name: Update Packages
tasks:
  - name: Update package lists
    command: apt-get update
    sudo: true
  - name: Upgrade packages
    command: apt-get upgrade -y
    sudo: true
```
Run it:
```bash
sshot -i inventory.yml playbook.yml
```
Web Server Deployment
This example deploys a web server with configuration:
```yaml
# inventory.yml
ssh_config:
  user: admin
  key_file: ~/.ssh/id_rsa

hosts:
  - name: webserver
    address: 192.168.1.100
```

```yaml
# playbook.yml
name: Deploy Web Server
tasks:
  - name: Install nginx
    command: apt-get install -y nginx
    sudo: true
  - name: Copy configuration
    copy:
      src: ./nginx.conf
      dest: /etc/nginx/nginx.conf
      mode: "0644"
    sudo: true
  - name: Start nginx
    command: systemctl restart nginx
    sudo: true
  - name: Wait for service
    wait_for: port:80
  - name: Verify service
    command: curl -s http://localhost
    register: curl_output
```
Run it:
```bash
sshot -i inventory.yml playbook.yml
```
Multi-tier Application Deployment
This example uses groups for ordered deployment:
```yaml
# inventory.yml
ssh_config:
  user: admin
  key_file: ~/.ssh/id_rsa

groups:
  - name: database
    order: 1
    hosts:
      - name: db1
        address: 192.168.1.10
  - name: application
    order: 2
    depends_on: [database]
    hosts:
      - name: app1
        address: 192.168.1.20
      - name: app2
        address: 192.168.1.21
  - name: loadbalancer
    order: 3
    depends_on: [application]
    hosts:
      - name: lb1
        address: 192.168.1.30
```

```yaml
# playbook.yml
name: Deploy Application Stack
tasks:
  - name: Update system
    command: apt-get update
    sudo: true
  - name: Install required packages
    command: apt-get install -y {{ .packages }}
    sudo: true
    vars:
      packages: "{{ .role_packages }}"
  - name: Start services
    command: systemctl restart {{ .service }}
    sudo: true
    vars:
      service: "{{ .role_service }}"
  - name: Health check
    command: "{{ .health_cmd }}"
    retries: 5
    retry_delay: 2
```
Run it:
```bash
sshot -i inventory.yml playbook.yml
```
Conditional Task Execution
This example shows conditional tasks based on host variables:
```yaml
# inventory.yml
ssh_config:
  user: admin
  key_file: ~/.ssh/id_rsa

hosts:
  - name: ubuntu-server
    address: 192.168.1.10
    vars:
      os: ubuntu
      version: "20.04"
  - name: centos-server
    address: 192.168.1.11
    vars:
      os: centos
      version: "8"
```

```yaml
# playbook.yml
name: OS-specific Updates
tasks:
  - name: Update Ubuntu
    command: apt-get update
    sudo: true
    when: "{{.os}} == ubuntu"
  - name: Update CentOS
    command: yum update -y
    sudo: true
    when: "{{.os}} == centos"
  - name: Install common tools
    shell: '[ "{{.os}}" = "ubuntu" ] && apt-get install -y vim || yum install -y vim'
    sudo: true
```
Run it:
```bash
sshot -i inventory.yml playbook.yml
```
Command Line Reference
```
sshot [options] <playbook.yml>
```
Options
| Option | Description |
|---|---|
| `-i, --inventory <file>` | Path to inventory file (if separate from playbook) |
| `-n, --dry-run` | Run in dry-run mode (simulate without executing) |
| `-v, --verbose` | Enable verbose logging |
| `-p, --progress` | Show progress indicators for long-running tasks |
| `-f, --full-output` | Show complete command output without truncation |
| `--no-color` | Disable colored output |
Examples
Basic execution:

```bash
sshot playbook.yml
```

With separate inventory:

```bash
sshot -i inventory.yml playbook.yml
```

Dry-run mode with verbose output:

```bash
sshot -n -v -i inventory.yml playbook.yml
```

With progress indicators:

```bash
sshot --progress -i inventory.yml playbook.yml
```

With full output:

```bash
sshot -f -i inventory.yml playbook.yml
```
Configuration Reference
Inventory
SSH Configuration
```yaml
ssh_config:
  user: admin                  # Default SSH user
  password: secret             # Default password (not recommended)
  key_file: ~/.ssh/id_rsa      # Path to SSH key
  key_password: passphrase     # SSH key passphrase
  port: 22                     # Default SSH port
  use_agent: true              # Use SSH agent for auth
  strict_host_key_check: true  # Verify host keys
```
Hosts
```yaml
hosts:
  - name: server1                    # Name for display
    address: 192.168.1.10            # IP address
    hostname: server1.example.com    # DNS hostname (alternative to address)
    user: admin                      # Override default user
    password: secret                 # Override default password
    key_file: ~/.ssh/custom_key      # Override default key file
    port: 2222                       # Override default port
    vars:                            # Host variables
      role: webserver
      env: production
```
Groups
```yaml
groups:
  - name: webservers          # Group name
    order: 1                  # Execution order
    parallel: true            # Execute hosts in parallel
    depends_on: [databases]   # Group dependencies
    hosts:
      - name: web1
        address: 192.168.1.10
      - name: web2
        address: 192.168.1.11
```
Playbook
Basic Structure
```yaml
name: My Playbook         # Playbook name
parallel: false           # Global parallel execution setting
tasks:                    # List of tasks
  - name: Task 1          # Task name
    command: echo "Hello" # Command to execute
```
Task Types
Command Task:

```yaml
- name: Execute command
  command: service nginx restart
  sudo: true # Run with sudo
```

Shell Task:

```yaml
- name: Execute shell command
  shell: find /var/log -name "*.log" | xargs ls -la
```

Script Task:

```yaml
- name: Run local script
  script: ./scripts/setup.sh # Local script path
```

Copy Task:

```yaml
- name: Copy file
  copy:
    src: ./local/file.txt       # Local file path
    dest: /remote/path/file.txt # Remote destination
    mode: "0644"                # File permissions
```

Wait For Task:

```yaml
- name: Wait for port
  wait_for: port:8080 # Wait for port to be available
```
Task Options
```yaml
- name: Complex task example
  command: deploy.sh
  sudo: true                      # Run with sudo
  when: "{{.env}} == production"  # Condition for execution
  register: deploy_output         # Store output in variable
  ignore_error: true              # Continue on error
  vars:                           # Task variables
    version: "2.0"
  depends_on: [Previous Task]     # Task dependencies
  retries: 3                      # Retry count
  retry_delay: 5                  # Seconds between retries
  timeout: 60                     # Task timeout in seconds
  until_success: true             # Retry until success
  allowed_exit_codes: [0, 1]      # Accept these exit codes as success
```
Advanced Features
Facts Collection in SSHOT
Overview
The facts collection feature allows you to gather system information from remote hosts before executing tasks. This information (facts) can then be used in your tasks for conditional execution or dynamic command generation, similar to Ansible facts.
Facts are collected by configurable commands that output JSON data. For example, you can use tools like Puppet's Facter to gather detailed system information.
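For orientation, an abridged and purely illustrative `facter --json` output might look like this; the exact keys and values vary with the Facter version and platform:

```json
{
  "os": {
    "name": "Ubuntu",
    "family": "Debian",
    "release": { "full": "22.04" }
  },
  "networking": { "hostname": "web1" },
  "memory": { "system": { "total": "16.00 GiB" } }
}
```

A collector named `puppet_facts` would expose these values as `{{.puppet_facts.os.family}}`, `{{.puppet_facts.networking.hostname}}`, and so on.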
Configuration
Facts collection is configured in the facts section of your playbook:
```yaml
playbook:
  name: My Playbook
  facts:
    collectors:
      - name: puppet_facts
        command: facter --json
        sudo: true
      - name: app_status
        command: /usr/local/bin/app-status.sh --json
        sudo: false
  tasks:
    - name: OS-specific Task
      command: echo "Running on {{.puppet_facts.os.name}}"
      when: "{{.puppet_facts.os.family}} == RedHat"
```
Collector Configuration
Each collector is defined with the following properties:
| Property | Description | Required |
|---|---|---|
| `name` | Name of the fact collection, used to access the facts | Yes |
| `command` | Command to execute that returns JSON data | Yes |
| `sudo` | Whether to run the command with sudo | No (default: `false`) |
Using Facts in Tasks
Facts are available as variables in your tasks, using the collector name as the prefix:
Basic Usage
```yaml
- name: Show OS Information
  command: echo "Running on {{.puppet_facts.os.name}} {{.puppet_facts.os.release.full}}"
```
Conditional Execution
```yaml
- name: Debian-specific Task
  command: apt-get update
  when: "{{.puppet_facts.os.family}} == Debian"

- name: RedHat-specific Task
  command: yum update
  when: "{{.puppet_facts.os.family}} == RedHat"
```
Nested Facts
Facts can have nested structures, which can be accessed using dot notation:
```yaml
- name: Show Memory Information
  command: echo "Total memory: {{.puppet_facts.memory.system.total}}"
```
Using Puppet Facter
Facter is a system profiling library from Puppet that collects facts about the system it runs on. It’s an excellent tool for gathering comprehensive system information.
Installing Facter
If you’re not using the full Puppet agent, you can install just Facter:
On Debian/Ubuntu:

```bash
sudo apt-get install facter
```

On RedHat/CentOS:

```bash
sudo yum install facter
```

Standalone Installation:

```bash
# Download and install Puppet's release package
wget https://apt.puppetlabs.com/puppet7-release-focal.deb
sudo dpkg -i puppet7-release-focal.deb
sudo apt-get update
sudo apt-get install facter
```
Using Facter with SSHOT
Once Facter is installed, you can use it in your facts collectors:
```yaml
facts:
  collectors:
    - name: system_facts
      command: /opt/puppetlabs/bin/facter --json
      sudo: true
```
Note that the Puppet packages typically install Facter at `/opt/puppetlabs/bin/facter`, which may not be on the `PATH` used with sudo. You may need to specify the full path in that case, as shown above.
Custom Fact Collectors
You can create your own custom fact collectors by writing scripts that output JSON data:
Example: Custom Application Status Collector
Create a script that outputs JSON:
```bash
#!/bin/bash
# /usr/local/bin/app-status.sh
echo '{'
echo '  "version": "1.2.3",'
echo '  "status": "running",'
echo '  "connections": 42,'
echo '  "uptime": "3d 2h 15m"'
echo '}'
```
Then use it in your playbook:
```yaml
facts:
  collectors:
    - name: app_status
      command: /usr/local/bin/app-status.sh
      sudo: false
```
Access the facts in your tasks:
```yaml
- name: Show Application Status
  command: echo "App version {{.app_status.version}} is {{.app_status.status}}"

- name: Restart if Connections Too High
  command: systemctl restart myapp
  when: "{{.app_status.connections}} > 100"
```
Troubleshooting
Command Not Found
If you get a "command not found" error when using Facter with sudo, make sure to use the full path to the Facter executable:

```yaml
command: /opt/puppetlabs/bin/facter --json
```
JSON Parsing Errors
The output of your collector commands must be valid JSON. If you’re creating a custom collector script, make sure it outputs properly formatted JSON.
To test your JSON output:
```bash
/usr/local/bin/my-collector.sh | jq
```
If jq reports errors, your JSON is not valid.
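If `jq` is not available, Python's standard library can perform the same check. This sketch stubs the collector with a shell function so it is self-contained; substitute your real collector script:

```shell
# Stand-in for a real collector script; replace with your own command.
collector() { echo '{"version": "1.2.3", "status": "running"}'; }

# json.tool exits non-zero (and prints an error) on invalid JSON.
if collector | python3 -m json.tool > /dev/null; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```

This prints `valid JSON` for the stub above; an unparsable payload takes the `else` branch.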
Accessing Facts in Templates
If you have complex nested facts, you can use the dot notation to access nested values:
```
{{.puppet_facts.networking.interfaces.eth0.ip}}
```
Examples
Basic System Information Playbook
```yaml
inventory:
  ssh_config:
    user: admin
    key_file: ~/.ssh/id_rsa
  hosts:
    - name: server1
      address: 192.168.1.10

playbook:
  name: System Information
  facts:
    collectors:
      - name: system
        command: facter --json
        sudo: true
  tasks:
    - name: Show System Information
      command: echo "Host: {{.system.networking.hostname}}, OS: {{.system.os.name}} {{.system.os.release.full}}, CPU: {{.system.processors.models.0}}, RAM: {{.system.memory.system.total}}"
```
OS-Specific Deployment
```yaml
inventory:
  ssh_config:
    user: deploy
    key_file: ~/.ssh/deploy_key
  hosts:
    - name: web1
      address: 192.168.1.10
    - name: web2
      address: 192.168.1.11

playbook:
  name: Deploy Application
  facts:
    collectors:
      - name: os_info
        command: facter --json os
        sudo: false
  tasks:
    - name: Install Dependencies (Debian)
      command: apt-get install -y nginx nodejs
      sudo: true
      when: "{{.os_info.os.family}} == Debian"
    - name: Install Dependencies (RedHat)
      command: yum install -y nginx nodejs
      sudo: true
      when: "{{.os_info.os.family}} == RedHat"
    - name: Deploy Application
      command: /usr/local/bin/deploy.sh
      sudo: true
```
Variable Substitution
SSHOT supports variable substitution in commands, scripts, and file content:
```yaml
# Inventory variables
hosts:
  - name: app1
    vars:
      app_name: myapp
      app_port: "8080"
      app_path: /opt/myapp
```

```yaml
# Task using variables
tasks:
  - name: Deploy application
    command: deploy {{.app_name}} --port {{.app_port}} --path {{.app_path}}
```
Task Dependencies
Tasks can depend on other tasks:
```yaml
tasks:
  - name: Install dependencies
    command: apt-get install -y build-essential
  - name: Build application
    command: make build
    depends_on: [Install dependencies]
  - name: Run tests
    command: make test
    depends_on: [Build application]
```
Group Dependencies
Groups can depend on other groups:
```yaml
groups:
  - name: databases
    order: 1
    hosts: [...]
  - name: applications
    order: 2
    depends_on: [databases]
    hosts: [...]
  - name: monitoring
    order: 3
    depends_on: [applications]
    hosts: [...]
```
Task Group Restrictions
Restrict tasks to specific groups:
```yaml
tasks:
  - name: Database Backup
    command: /usr/local/bin/backup-db.sh
    sudo: true
    only_groups: [database]
  - name: Web Server Config
    command: /etc/nginx/sites-available/default
    sudo: true
    only_groups: [webserver]
  - name: Update All Servers
    command: apt-get update
    sudo: true
    # No only_groups, runs on all hosts
  - name: Test Environment Only
    command: /usr/local/bin/test-feature.sh
    skip_groups: [production]
```
Retries and Error Handling
```yaml
tasks:
  - name: Unreliable task
    command: curl http://api.example.com
    retries: 5      # Try 5 times
    retry_delay: 2  # 2 seconds between retries
  - name: Task that might fail
    command: grep "error" /var/log/app.log
    ignore_error: true # Continue even if it fails
  - name: Task with custom exit codes
    command: grep "pattern" file.txt
    allowed_exit_codes: [0, 1] # 0=found, 1=not found, both are OK
```
Timeouts and Progress Indicators
```yaml
tasks:
  - name: Long-running task
    command: backup.sh
    timeout: 300 # 5 minute timeout
```
Run with progress indicators:
```bash
sshot --progress playbook.yml
```
Local Action, Delegation, and Run Once
These powerful features allow for more sophisticated orchestration patterns:
Local Action
```yaml
- name: Run locally
  local_action: echo "Running on the local machine"

- name: Fetch remote logs locally
  local_action: mkdir -p ./logs/{{ .inventory_hostname }}

- name: Send notification
  local_action: curl -X POST https://api.example.com/notify -d "host={{ .inventory_hostname }}"
```
local_action executes commands on the machine running sshot rather than on the remote hosts. This is useful for:
- Coordinating activities between hosts
- Creating local directories for storing remote data
- Sending notifications about deployment progress
- Interacting with local resources (databases, APIs, files)
- Running commands that require local tools or credentials
For more complex local operations, you can use scripts:
```yaml
- name: Run complex local operations
  local_action: ./scripts/local-tasks.sh {{ .inventory_hostname }} {{ .role }}
```
Delegate To
```yaml
- name: Run database backup
  command: pg_dump -U postgres mydb > /tmp/backup.sql
  delegate_to: db-primary

- name: Health check from load balancer
  command: curl -sf http://{{ .inventory_hostname }}:8080/health
  delegate_to: loadbalancer

- name: Run locally via delegation
  command: ./scripts/notify.sh "Deploying to {{ .inventory_hostname }}"
  delegate_to: localhost
```
The delegate_to option runs a command on a specific host instead of the current host in the execution. Key use cases:
- Database operations that should only run on the primary database server
- Load balancer operations (adding/removing hosts)
- Centralized logging or monitoring tasks
- Specialized operations that require specific tools only available on certain hosts
Important notes:
- The delegated host must exist in your inventory
- `delegate_to: localhost` is equivalent to using `local_action`
- Tasks are completely skipped on non-delegated hosts
- Variables from the original host context remain available
Run Once
```yaml
- name: Initialize application
  command: ./init-database.sh
  run_once: true

- name: Send deployment notification
  local_action: ./notify.sh "Deployment started"
  run_once: true

- name: Run integration tests
  command: ./run-tests.sh
  run_once: true
  register: test_results
```
The run_once flag ensures a task executes on only one host, even when multiple hosts are targeted:
- Perfect for database migrations and schema updates
- Useful for notifications that should happen once per deployment
- Good for integration testing after deployment
- Helpful for initialization tasks that affect the whole system
By default, run_once tasks execute on the first host in the inventory. Combine with delegate_to to specify which host runs the task.
Combining Features
These features are most powerful when combined:
```yaml
- name: Database migration
  command: ./migrate.sh
  delegate_to: db-primary
  run_once: true

- name: Send deployment notification with all hosts
  local_action: ./notify-slack.sh "Deploying to {{ groups['web'] | join(', ') }}"
  run_once: true

- name: Load balancer drain
  command: ./lb-control.sh drain {{ .inventory_hostname }}
  delegate_to: lb-main
  register: lb_status
```
Advanced Patterns
Rolling Deployment with Load Balancer:
```yaml
tasks:
  - name: Remove from load balancer
    command: ./lb-control.sh remove {{ .inventory_hostname }}
    delegate_to: loadbalancer
  - name: Update application
    command: ./deploy.sh {{ .version }}
  - name: Verify application health
    command: curl -f http://localhost:8080/health
    retries: 5
    retry_delay: 2
  - name: Add back to load balancer
    command: ./lb-control.sh add {{ .inventory_hostname }}
    delegate_to: loadbalancer
```
Centralized Backup:
```yaml
tasks:
  - name: Create backup directory
    local_action: mkdir -p ./backups/{{ .timestamp }}/{{ .inventory_hostname }}
    run_once: true
    vars:
      timestamp: "{{ date +%Y%m%d-%H%M%S }}"
  - name: Backup database
    command: pg_dump -Fc mydb > /tmp/mydb.dump
  - name: Fetch backup files
    local_action: scp {{ .user }}@{{ .inventory_hostname }}:/tmp/mydb.dump ./backups/{{ .timestamp }}/{{ .inventory_hostname }}/
```
Coordinated Multi-tier Deployment:
```yaml
tasks:
  - name: Notify deployment start
    local_action: ./notify.sh "Starting deployment to {{ .env }}"
    run_once: true
  - name: Update database schema
    command: ./migrate-db.sh
    delegate_to: db-primary
    run_once: true
  - name: Rolling update of application servers
    command: ./deploy-app.sh
  - name: Update load balancers
    command: ./update-lb-config.sh
    delegate_to: "{{ item }}"
    run_once: true
    with_items:
      - lb1
      - lb2
```
Troubleshooting
SSH Connection Issues
Problem: Host key verification failed
```
host key verification failed for hostname
```
Solution:
```bash
ssh-keyscan -H hostname >> ~/.ssh/known_hosts
```
Or disable strict checking in inventory (less secure):
```yaml
ssh_config:
  strict_host_key_check: false
```
Problem: Authentication failure
Check:
- SSH key permissions: `chmod 600 ~/.ssh/id_rsa`
- SSH key path is correct in inventory
- Correct username and password
- Try manual SSH: `ssh user@host`
Task Execution Issues
Problem: Command not found
Solution: Use full paths to executables or specify the correct shell.
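For example, either of these approaches avoids PATH lookup problems (the paths and commands are illustrative):

```yaml
- name: Use an absolute path
  command: /usr/sbin/nginx -t
  sudo: true

- name: Use a shell task when PATH lookups or pipes are needed
  shell: command -v nginx && nginx -t
  sudo: true
```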
Problem: Permission denied
Solution: Add `sudo: true` to tasks requiring elevated privileges.
Problem: Timeouts
Solution: Increase timeout value for long-running tasks:
```yaml
- name: Long task
  command: backup.sh
  timeout: 600 # 10 minutes
```
Playbook Logic Issues
Problem: Task skipped unexpectedly
Check:
- Verify condition syntax in the `when` clause
- Check variable values with verbose mode: `sshot -v playbook.yml`
- Ensure dependencies are correctly defined
Problem: Task fails despite retries
Solution:
- Check retry settings:

  ```yaml
  - name: Flaky task
    command: unreliable.sh
    retries: 10
    retry_delay: 5
  ```

- Consider `ignore_error: true` if the task is optional
- Use `allowed_exit_codes` for commands with non-zero success codes
Getting Help
For more assistance:
- Create an issue on the GitHub repository
- Check the full source code documentation
License
Apache License 2.0 - see LICENSE file for details.