Binary Skills
A binary skill is a compiled CLI tool that agents invoke via ion run. Unlike a plain skill — which is a static SKILL.md file with instructions — a binary skill is an executable that can perform actions, query external services, process data, and produce structured output.
Use a binary skill when the task requires more than instructions:
| Plain skill | Binary skill |
|---|---|
| Instructions the agent follows | A tool the agent runs |
| Pure text, no dependencies | Can call APIs, read files, shell out |
| Distributed as a SKILL.md | Distributed as compiled binary + SKILL.md |
| Always available | Requires ion run to invoke |
The SKILL.md of a binary skill tells the agent when and how to invoke the executable — the executable does the actual work.
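A minimal sketch of what such a SKILL.md might look like — the frontmatter fields and wording here are illustrative, not a required schema:

```markdown
---
name: my-linter
description: Lints project files and reports violations
---

# my-linter

When the user asks to lint the project, run:

    ion run my-linter check [--strict]

Violations are printed one per line; exit code 0 means the project is clean.
```

The key point is that the instructions describe *when to run the binary and how to read its output*, rather than how to do the work itself.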
Installing and running
Install from GitHub (Ion downloads the pre-built binary from the repo’s releases):
ion add owner/mytool
Run it:
ion run mytool [subcommand] [args...]
To override the inferred skill name:
ion add owner/mytool --name custom-name
ion run custom-name [args...]
Creating a binary skill
Scaffold a new binary skill project with a single command:
$ ion init --bin my-linter
✓ Cargo project scaffolded
✓ SKILL.md created
Set up GitHub Actions CI/CD? [Y/n]
✓ .github/workflows/ci.yml
✓ .github/workflows/release.yml
✓ .github/workflows/release-plz.yml

This creates a Cargo crate with src/main.rs (clap CLI with self subcommands pre-wired), build.rs (embeds the build target and copies SKILL.md), a starter SKILL.md, and GitHub Actions workflows for CI, releases, and automated version bumps.
Prompt:
Scaffold a new binary skill project with `ion init --bin <name>`. Accept the CI/CD setup prompt, then help me implement the core subcommands and write the SKILL.md so agents know when and how to invoke it.
Project structure
my-linter/
├── Cargo.toml # declares ionem as a dependency
├── build.rs # embeds target triple and SKILL.md
├── SKILL.md # static skill description (bundled into binary)
└── src/
└── main.rs # clap CLI with self subcommands
Declare the project as a binary skill in Ion.toml:
[project]
type = "binary"
binary = "my-linter" # optional, defaults to Cargo.toml package name
The ionem library
All Ion binary skills are expected to implement a standard self subcommand interface. The ionem crate provides this out of the box.
Add it to your project:
cargo add ionem
cargo add --build ionem --no-default-features
Wire it up in build.rs:
fn main() {
ionem::build::emit_target(); // embeds TARGET env var
ionem::build::copy_skill_md(); // copies SKILL.md into OUT_DIR
}
And in src/main.rs:
use ionem::self_update::SelfManager;
fn manager() -> SelfManager {
SelfManager::new(
"owner/my-linter", // GitHub repo
"my-linter", // binary name
"v", // tag prefix
env!("CARGO_PKG_VERSION"),
env!("TARGET"), // injected by build.rs
)
}
The self subcommand convention
Every binary skill must expose these subcommands:
ion run my-linter self skill # prints SKILL.md to stdout (used by ion during install)
ion run my-linter self info # version, target, path
ion run my-linter self check # check for newer release on GitHub
ion run my-linter self update # download and install latest release
ionem::SelfManager implements all four. Wire them in your match block and the skill is fully Ion-compatible.
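The dispatch itself is a plain match over the argument list. The following self-contained sketch shows the shape; the stub return values stand in for the real SelfManager calls, whose exact method names depend on the ionem version you use:

```rust
// Dispatch sketch for the `self` subcommand convention. In a real
// skill each arm would call into ionem's SelfManager; the stub
// return values below only mark which arm matched.
fn dispatch_self(args: &[&str]) -> Result<&'static str, String> {
    match args {
        ["self", "skill"] => Ok("skill"),   // print embedded SKILL.md
        ["self", "info"] => Ok("info"),     // print version, target, path
        ["self", "check"] => Ok("check"),   // query GitHub for a newer release
        ["self", "update"] => Ok("update"), // download and install latest
        other => Err(format!("unknown subcommand: {other:?}")),
    }
}

fn main() {
    let args: Vec<String> = std::env::args().skip(1).collect();
    let refs: Vec<&str> = args.iter().map(String::as_str).collect();
    match dispatch_self(&refs) {
        Ok(cmd) => println!("would run self {cmd}"),
        Err(e) => eprintln!("{e}"),
    }
}
```

Anything outside the `self` namespace falls through to your own subcommands, so the convention composes cleanly with the rest of your CLI.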
Development workflow
Register your local checkout in dev mode — no release build needed:
# From your consumer project
$ ion add --dev ../my-linter
✓ Registered 'my-linter' (dev mode) — ion run forwards to cargo run

Every ion run my-linter [args] now calls cargo run -- [args] in the source directory, picking up your latest changes immediately.
Switch back to the published release:
$ ion add owner/my-linter --bin

Prompt:
I'm developing a binary skill locally. Run `ion add --dev .` to register it in dev mode so `ion run` forwards to `cargo run`. Help me test and iterate without a release build.
Publishing a release
Tag a release — the scaffolded GitHub Actions workflow takes it from there:
git tag v1.0.0
git push origin v1.0.0

The release workflow builds and attaches binaries for four targets:
| Target | File |
|---|---|
| aarch64-apple-darwin | my-linter-1.0.0-aarch64-apple-darwin.tar.gz |
| x86_64-apple-darwin | my-linter-1.0.0-x86_64-apple-darwin.tar.gz |
| aarch64-unknown-linux-musl | my-linter-1.0.0-aarch64-unknown-linux-musl.tar.gz |
| x86_64-unknown-linux-musl | my-linter-1.0.0-x86_64-unknown-linux-musl.tar.gz |
Ion detects the user’s platform and downloads the matching asset automatically.
Users install with:
ion add owner/my-linter

Prompt:
Help me publish a new release of this binary skill. Tag the version, verify the release workflow builds binaries for all targets, and confirm the asset naming follows the Ion convention.
Asset naming convention
Release tarballs must follow this pattern for Ion to detect them:
{binary}-{version}-{target}.tar.gz
For example: my-linter-1.0.0-aarch64-apple-darwin.tar.gz
Each tarball should contain the binary at its root (not nested in a subdirectory).
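The convention is simple enough to generate and verify mechanically. A small sketch (the function name here is ours, not part of Ion or ionem):

```rust
// Build the release asset name Ion expects for a given binary name,
// version, and target triple: {binary}-{version}-{target}.tar.gz
fn asset_name(binary: &str, version: &str, target: &str) -> String {
    format!("{binary}-{version}-{target}.tar.gz")
}

fn main() {
    let name = asset_name("my-linter", "1.0.0", "aarch64-apple-darwin");
    assert_eq!(name, "my-linter-1.0.0-aarch64-apple-darwin.tar.gz");
    println!("{name}");
}
```

Note that the version in the filename has no `v` prefix even though the git tag does.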
Testing with scenario
The scenario crate is the standard test harness for Ion binary skills. It runs your CLI under controlled terminal conditions — with or without a TTY, at specific dimensions, with isolated env and working directories — so you can test both piped output and interactive behavior.
Add it to your [dev-dependencies]:
cargo add --dev scenario
Non-interactive tests
Use Scenario::new to build a run and .run() to execute it. The default terminal is piped (is_terminal() returns false in the child).
use scenario::Scenario;
const MY_LINTER: &str = env!("CARGO_BIN_EXE_my-linter");
#[test]
fn help_exits_successfully() {
let output = Scenario::new(MY_LINTER)
.args(["--help"])
.run()
.unwrap();
assert!(output.success());
assert!(output.stdout().contains("Usage:"));
}
#[test]
fn check_command_fails_on_bad_input() {
let output = Scenario::new(MY_LINTER)
.args(["check", "--strict"])
.current_dir("/tmp/bad-project")
.run()
.unwrap();
assert!(!output.success());
assert!(output.stderr().contains("error:"));
}
Output provides:
| Method | Description |
|---|---|
| .success() | true if exit code is 0 |
| .exit_code() | numeric exit code |
| .stdout() | ANSI-stripped stdout |
| .stderr() | ANSI-stripped stderr (empty in PTY mode) |
PTY mode — testing TTY-aware output
Pass Terminal::pty(cols, rows) to run the child in a real pseudo-terminal. The child process sees is_terminal() == true, so color output, spinners, and TTY-gated prompts activate.
use scenario::{Scenario, Terminal};
#[test]
fn pty_output_has_color() {
let output = Scenario::new(MY_LINTER)
.args(["check"])
.terminal(Terminal::pty(80, 24))
.run()
.unwrap();
// PTY merges stdout + stderr into stdout
assert!(output.stdout().contains("✓") || output.stdout().contains("error"));
}
Project fixtures
Project creates an isolated temp directory for each test — no shared state between runs.
use scenario::{Project, Scenario};
#[test]
fn check_passes_on_clean_project() {
// Empty temp directory
let project = Project::empty()
.file("src/main.rs", "fn main() {}")
.build()
.unwrap();
let output = Scenario::new(MY_LINTER)
.args(["check"])
.project(&project) // sets current_dir to project.path()
.run()
.unwrap();
assert!(output.success());
}
For repeatable fixtures, save template directories under tests/fixtures/ and instantiate them with Project::from_template:
tests/
└── fixtures/
└── my-fixture/
├── template.toml # declares variables (optional)
├── src/
│ └── main.rs
└── Cargo.toml
#[test]
fn check_on_fixture() {
let project = Project::from_template("tests/fixtures/my-fixture")
.build()
.unwrap();
let output = Scenario::new(MY_LINTER)
.project(&project)
.run()
.unwrap();
assert!(output.success());
}
Template files support Jinja2 variables. Declare them in template.toml:
[variables]
name = { description = "Package name", default = "my-package" }
Then pass values at build time: .var("name", "custom-name").
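To illustrate the effect of variable substitution — this sketch only mimics a literal `{{ name }}` replacement with a string replace, while the real rendering goes through a Jinja2-compatible engine with defaults and filters:

```rust
// Minimal stand-in for template rendering: replace one {{ key }}
// placeholder with a value, as .var("name", "custom-name") would.
fn render_var(template: &str, key: &str, value: &str) -> String {
    template.replace(&format!("{{{{ {key} }}}}"), value)
}

fn main() {
    let cargo_toml = r#"name = "{{ name }}""#;
    let rendered = render_var(cargo_toml, "name", "custom-name");
    assert_eq!(rendered, r#"name = "custom-name""#);
}
```

If no value is passed at build time, the default declared in template.toml is used.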
Interactive sessions
For commands with prompts or TUI menus, use .spawn() to get a Session. This requires Terminal::Pty.
use scenario::{Project, Scenario, Terminal};
#[test]
fn interactive_init_accepts_defaults() {
let project = Project::empty().build().unwrap();
let mut session = Scenario::new(MY_LINTER)
.args(["init"])
.terminal(Terminal::pty(80, 24))
.project(&project)
.spawn()
.unwrap();
session.expect("Choose a preset:").unwrap();
session.send_line("default").unwrap();
session.expect("Created").unwrap();
let output = session.wait().unwrap();
assert!(output.success());
}
Key Session methods:
| Method | Description |
|---|---|
| .expect(pattern) | Wait until output contains the string |
| .expect_regex(pattern) | Wait until output matches a regex |
| .expect_not(pattern) | Assert the pattern does NOT appear (500ms window) |
| .send_line(text) | Type text and press Enter |
| .send_key(Key::Down) | Send arrow keys, Ctrl+C, etc. |
| .visible_screen() | Current terminal screen as a grid of lines |
| .visible_text() | Current screen as newline-delimited text |
| .wait() | Close stdin, wait for exit, return Output |
expect and expect_regex advance a cursor — each call only searches output that appeared after the previous match, so they naturally sequence into a script.
For TUI tools that redraw the screen in place (rather than streaming lines), use .expect_screen() / .visible_text() to inspect what’s actually visible rather than the raw output stream.
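The advancing cursor can be pictured in a few lines — a simplified model of the idea, not the scenario implementation:

```rust
// Model of expect()'s advancing cursor: each search starts where the
// previous match ended, so repeated calls form an ordered script.
struct Cursor {
    pos: usize,
}

impl Cursor {
    fn expect(&mut self, output: &str, pattern: &str) -> bool {
        match output[self.pos..].find(pattern) {
            Some(i) => {
                self.pos += i + pattern.len();
                true
            }
            None => false,
        }
    }
}

fn main() {
    let out = "Choose a preset:\nCreated my-linter\n";
    let mut c = Cursor { pos: 0 };
    assert!(c.expect(out, "Choose a preset:"));
    assert!(c.expect(out, "Created"));
    // Earlier output is behind the cursor, so this no longer matches.
    assert!(!c.expect(out, "Choose"));
}
```

This is why out-of-order expectations fail even when the text did appear earlier in the session.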
Ion.toml reference
When users install your binary skill, it appears in their Ion.toml like this:
[skills]
my-linter = { type = "binary", source = "owner/my-linter", binary = "my-linter" }
In dev mode:
[skills]
my-linter = { type = "binary", source = "owner/my-linter", binary = "my-linter", dev = "../my-linter" }