gpu.aiDocs
UPDATED 2026.05.15 · READ 6 MIN
CH·02 · CLI

The gpu CLI.

gpu is a single binary that wraps the public REST API and adds one killer command: gpu instances ssh <id> resolves the FRP tunnel for you and execs ssh — no more remembering port numbers and StrictHostKeyChecking flags.

§ 02.1 Install

The CLI ships for darwin and linux on amd64 and arm64. Pick whichever path fits your workflow.

Homebrew (macOS + Linux)

brew install gpuai-dev/tap/gpu

curl | sh

curl https://install.demo.gpu.ai/install | sh

The script detects your OS and arch, downloads the matching archive from GitHub Releases, verifies SHA-256, and installs to /usr/local/bin/gpu (override with INSTALL_DIR=$HOME/.local/bin).
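The detection step can be sketched in shell. The logic below is an illustration of what installers like this typically do, and the `gpu_${os}_${arch}.tar.gz` asset name is a hypothetical example, not the script's actual naming scheme:

```shell
# Sketch of OS/arch detection; asset name at the end is hypothetical.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # darwin or linux
arch=$(uname -m)
case "$arch" in
  x86_64)        arch=amd64 ;;
  aarch64|arm64) arch=arm64 ;;
esac
echo "gpu_${os}_${arch}.tar.gz"
```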

Manual download

Grab the tarball for your platform from github.com/gpuai-dev/gpu-cli/releases (look for v* tags), extract it, place gpu on your $PATH, and verify the archive against checksums.txt.
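The verification step looks like this in practice. The tarball below is a local stand-in created just to make the `sha256sum -c` round-trip concrete; with a real release you would download both the archive and the upstream checksums.txt (on macOS, use `shasum -a 256` in place of `sha256sum`):

```shell
# Stand-in tarball so the checksum check is runnable end to end.
printf 'demo contents' > gpu_demo.tar.gz
# Upstream ships checksums.txt with each release; recreated here for the demo.
sha256sum gpu_demo.tar.gz > checksums.txt
sha256sum -c checksums.txt   # prints "gpu_demo.tar.gz: OK" on a match
```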

§ 02.2 Authenticate

Every command except --help and --version requires a gpuai_live_* API key. The CLI resolves credentials in this order:

  1. GPUAI_API_KEY environment variable — the recommended path for CI, scripts, and the current private beta.
  2. ~/.config/gpu/credentials.json — written by gpu login after the prod GA cutover.
  3. Legacy ~/.config/gpuctl/config.toml — back-compat with the older gpuctl auth login flow.
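The lookup order above can be sketched as a shell function. The paths come from the list; the control flow itself is an assumption about the CLI's internals, not its actual code:

```shell
# Returns which credential source would win, mirroring the order above.
resolve_src() {
  cred_json="${XDG_CONFIG_HOME:-$HOME/.config}/gpu/credentials.json"
  legacy_toml="$HOME/.config/gpuctl/config.toml"
  if [ -n "${GPUAI_API_KEY:-}" ]; then echo env
  elif [ -f "$cred_json" ]; then echo credentials.json
  elif [ -f "$legacy_toml" ]; then echo legacy-toml
  else echo none
  fi
}
GPUAI_API_KEY=gpuai_live_example resolve_src   # prints "env"
```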
SHELL
export GPUAI_API_KEY=gpuai_live_xxxxxxxxxxxxxxxxxxxxxxxx
gpu instances list

§ 02.3 First commands

Browse what's available before launching anything.

SHELL
# Every supported GPU type
gpu gpu-types list

# Real-time pricing across providers and regions
gpu pricing

# Filter to a specific GPU type
gpu pricing --gpu-type h100_sxm

# Your existing SSH keys (you'll need at least one before launching)
gpu keys list

If you don't have an SSH key registered yet, see chapter 06 for how to generate one and add it via gpu keys create.
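For a quick preview of that flow, generating a key takes one command. The `gpu keys create` flags at the end are guesses for illustration; check `gpu keys create --help` (and chapter 06) for the real ones:

```shell
# Generate a fresh ed25519 keypair (standard OpenSSH; -N '' means no passphrase).
ssh-keygen -t ed25519 -f ./gpu_demo_key -N '' -C 'gpu-cli demo' >/dev/null
cat ./gpu_demo_key.pub
# Register it -- flag names below are hypothetical:
#   gpu keys create --name demo --public-key "$(cat ./gpu_demo_key.pub)"
```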

§ 02.4 Launch and connect

The end-to-end loop: provision, ssh in, do your work, terminate.

SHELL
# Launch — polls the operation FSM until status=running (~5 sec on demo)
gpu instances create \
  --type h100_sxm \
  --tier on_demand \
  --ssh-key-id <your-key-id> \
  --name training-run

# Connect — resolves the FRP tunnel (host + port) from the API and execs ssh
gpu instances ssh <instance-id>

# Forward a local port (anything after -- is passed through to ssh)
gpu instances ssh <instance-id> -- -L 8888:localhost:8888

# Print the equivalent ssh command without exec'ing (useful for ssh configs)
gpu instances ssh <instance-id> --print

# Terminate when you're done
gpu instances delete <instance-id>

§ 02.5 Output formats

The default is table on a TTY and json when stdout is piped — so gpu instances list | jq just works. Override with the global --output flag.

SHELL
gpu instances list                    # table (TTY) or JSON (pipe)
gpu instances list --output json      # always JSON
gpu instances list --output table     # always table

# Pull just the IDs of running instances
gpu instances list --output json | jq -r '.[] | select(.status == "running") | .id'
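The TTY detection can be mimicked in plain shell: `[ -t 1 ]` is the shell spelling of the isatty() check the CLI presumably performs on stdout. A sketch, not the CLI's actual code:

```shell
# table on a TTY, json when stdout is a pipe -- same rule as described above.
default_format() { if [ -t 1 ]; then echo table; else echo json; fi; }
default_format          # "table" when run in an interactive terminal
default_format | cat    # prints "json" -- stdout is a pipe, exactly like `| jq`
```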

§ 02.6 Environment variables

Variable               Default                  Purpose
GPUAI_API_KEY          (unset)                  API key for authenticated requests.
GPUAI_API_BASE         https://api.gpu.ai/v1    Override the API host. Useful for staging or self-hosted backends.
GPU_LOGIN_BASE         https://gpu.ai           Dashboard host for the device-flow approval URL.
GPU_BROWSER_DISABLED   (unset)                  Set to 1 to skip auto-opening the browser on `gpu login`.
XDG_CONFIG_HOME        ~/.config                Override the directory that holds credentials.json.
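To see how XDG_CONFIG_HOME shifts the credentials path (the `gpu/credentials.json` layout comes from § 02.2; the helper is just a sketch):

```shell
# credentials.json lives under ${XDG_CONFIG_HOME:-$HOME/.config}/gpu/.
creds_path() { echo "${XDG_CONFIG_HOME:-$HOME/.config}/gpu/credentials.json"; }
creds_path                            # defaults to ~/.config/gpu/credentials.json
XDG_CONFIG_HOME=/tmp/xdg creds_path   # prints /tmp/xdg/gpu/credentials.json
```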

§ 02.7 Reference and source

Per-command pages with full flag tables live under docs/cli/ in the source repo (mirror of gpu <cmd> --help). Release notes and breaking changes go in the CHANGELOG.

Found a bug or missing flag? File an issue — CLI source lives at cmd/gpu/ with the typed REST client at internal/apiclient/ and the formatters at internal/cliprint/.