Public API.
GPU.ai exposes a JSON REST API for everything you can do in the dashboard: list GPU types, provision instances, manage SSH keys and webhooks, and pull usage data. Authenticate with a Bearer API key, point at https://api.demo.gpu.ai/v1, and you're live.
§ 01.1 Get an API key
Keys have the shape gpuai_live_<24 chars> and are shown exactly once at creation. We only store a SHA-256 hash — there is no recovery if you lose the raw key.
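The hash-only storage described above can be sketched as follows. This is an illustrative sketch, not GPU.ai's implementation: make_key and verify_key are hypothetical names, and the exact key-generation scheme is an assumption.

```python
import hashlib
import secrets

def make_key() -> tuple[str, str]:
    """Generate a raw API key plus the SHA-256 digest a server would store.

    Sketch only: the real key-generation scheme is not documented here.
    """
    raw = "gpuai_live_" + secrets.token_urlsafe(18)  # 18 bytes -> 24 url-safe chars
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return raw, digest

def verify_key(raw: str, stored_digest: str) -> bool:
    """Check a presented key against the stored hash in constant time."""
    candidate = hashlib.sha256(raw.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_digest)
```

Because only the digest is stored, a leaked database cannot be replayed as live keys, and a lost raw key can only be replaced, never recovered.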
From the dashboard
Sign in, open API Keys in the cloud dashboard, and click Create key. Copy the raw key from the modal before closing it.
From the CLI
Install gpuctl and run gpuctl auth login. The device flow mints a key and writes it to ~/.config/gpuctl/config.toml with mode 0600.
§ 01.2 Hello, world (no auth)
The catalog endpoints are public — no API key required. List every GPU type in the marketplace:
curl https://api.demo.gpu.ai/v1/gpu-types

Responses are JSON envelopes with a data array and a next_cursor field — see the pagination conventions.
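The cursor envelope can be consumed with a loop like the following sketch. The fetch_page callable stands in for an HTTP GET against the endpoint (passing the cursor back as a query parameter) and is an assumption, not a real client:

```python
from typing import Callable, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every item across pages of a {"data": [...], "next_cursor": ...} envelope.

    `fetch_page` takes a cursor (None for the first page) and returns
    one decoded JSON envelope.
    """
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)
        yield from page["data"]
        cursor = page.get("next_cursor")
        if not cursor:  # null/absent cursor means the last page
            return
```

The loop stops on a null or missing next_cursor, so callers can simply iterate without tracking pagination state themselves.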
§ 01.3 Provision your first instance
Writes need a Bearer key and an Idempotency-Key header. Instance creation is async — the response is 202 Accepted with an Operation-Id you can poll at GET /v1/operations/{id}.
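The 202-plus-Operation-Id flow can be wrapped in a polling helper like this sketch. The get_operation callable stands in for GET /v1/operations/{id}, and the "pending"/"running" status values are assumptions about the operation payload:

```python
import time
from typing import Callable

def wait_for_operation(get_operation: Callable[[], dict],
                       timeout: float = 300.0,
                       interval: float = 2.0) -> dict:
    """Poll an async operation until it reaches a terminal state.

    `get_operation` stands in for GET /v1/operations/{id}. The
    non-terminal statuses checked here are assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = get_operation()
        if op.get("status") not in ("pending", "running"):
            return op  # terminal: e.g. succeeded or failed
        time.sleep(interval)
    raise TimeoutError("operation did not reach a terminal state in time")
```

Reusing the same Idempotency-Key on a retried create is what makes this safe: a network timeout followed by a retry returns the original operation instead of provisioning a second instance.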
curl -X POST https://api.demo.gpu.ai/v1/instances \
  -H "Authorization: Bearer gpuai_live_..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: $(uuidgen)" \
  -d '{
    "gpu_type": "h100_sxm",
    "gpu_count": 1,
    "tier": "on_demand",
    "ssh_key_ids": ["sshkey_01HX..."]
  }'

§ 01.4 OpenAPI spec
The full OpenAPI 3.1 spec lives at https://api.demo.gpu.ai/v1/openapi.json. Import it into Postman, Stoplight, or any other OpenAPI tool to explore the API interactively.