
# CLI Reference

The Gambi CLI provides commands for managing hubs, rooms, and participants.

All commands support interactive mode — run without flags in a terminal and you’ll be guided through each option step by step. Flags still work for scripting and automation.

If you’re coming from the old package and binary names, read Migrate from Gambiarra.

## Installation

Install:

```sh
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/install.sh | bash
```

Uninstall:

```sh
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/uninstall.sh | bash
```

## Interactive Mode

When you run any command without its required flags in a terminal (TTY), the CLI enters interactive mode and prompts you for each option:

```
┌ gambi join
◇ Room code:
│ ABC123
◆ LLM Provider:
│ ● Ollama (localhost:11434)
│ ○ LM Studio (localhost:1234)
│ ○ vLLM (localhost:8000)
│ ○ Custom URL
◇ Select model:
│ llama3.2
└ Joined room ABC123!
```

Interactive mode is disabled when input is piped (`echo "x" | gambi create`), so scripts work as before.
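The TTY check behind this behavior can be illustrated in plain shell. This is a sketch of the general technique, not Gambi's actual source:

```sh
# Illustrative only: how a CLI typically decides between interactive
# prompts and flag-only mode, using the standard POSIX TTY test.
if [ -t 0 ]; then
  echo "stdin is a TTY — interactive prompts enabled"
else
  echo "stdin is piped — flags required"
fi
```

Piping anything into the script (or running it from cron or CI) makes `[ -t 0 ]` false, which is why flagged invocations keep working unchanged.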

## gambi serve

Start a hub server.

```sh
# Interactive — prompts for port, host, mDNS:
gambi serve

# With flags:
gambi serve [options]
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--port`, `-p` | Port to listen on | `3000` |
| `--host`, `-h` | Host to bind to | `0.0.0.0` |
| `--mdns`, `-m` | Enable mDNS auto-discovery | `false` |
| `--quiet`, `-q` | Suppress logo output | `false` |

Example:

```sh
gambi serve --port 3000 --mdns
```

## gambi create

Create a new room on a hub.

```sh
# Interactive — prompts for name and password:
gambi create

# With flags:
gambi create --name "Room Name" [options]
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--name`, `-n` | Room name | Required (prompted in interactive mode) |
| `--password`, `-p` | Password to protect the room | None |
| `--hub`, `-H` | Hub URL | `http://localhost:3000` |

Examples:

```sh
# Create a room interactively
gambi create

# Create with flags
gambi create --name "My Room"

# Create on a custom hub
gambi create --name "My Room" --hub http://192.168.1.10:3000

# Create a password-protected room
gambi create --name "My Room" --password secret123
```

## gambi join

Join a room and expose your LLM endpoint.

```sh
# Interactive — select provider, model, set nickname:
gambi join

# With flags:
gambi join --code <room-code> --model <model> [options]
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--code`, `-c` | Room code to join | Required (prompted in interactive mode) |
| `--model`, `-m` | Model to expose | Required (prompted in interactive mode) |
| `--endpoint`, `-e` | Local LLM endpoint URL used for probing and inference | `http://localhost:11434` |
| `--network-endpoint` | Network-reachable URL to publish to the hub | Auto-detected when needed |
| `--nickname`, `-n` | Display name | Auto-generated |
| `--header` | Auth header in the format `Header=Value` | None |
| `--header-env` | Auth header in the format `Header=ENV_VAR` | None |
| `--password`, `-p` | Room password (if protected) | None |
| `--hub`, `-H` | Hub URL | `http://localhost:3000` |
| `--no-specs` | Don’t share machine specs | `false` |
| `--no-network-rewrite` | Disable automatic localhost-to-LAN rewrite for remote hubs | `false` |

The CLI automatically probes your local endpoint to detect available models and protocol capabilities (Responses API vs Chat Completions).
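As an illustration of what such a probe can look like, here is a sketch that assumes the endpoint serves the OpenAI-compatible `/v1/models` route (which Ollama, LM Studio, and vLLM all expose). This is not Gambi's actual implementation:

```sh
# List model ids from an OpenAI-compatible endpoint (requires jq).
probe_models() {
  curl -fsS "$1/v1/models" | jq -r '.data[].id'
}

# Example against a local Ollama instance:
probe_models http://localhost:11434
```

A real probe would additionally try the Responses API route to decide which protocol to use, falling back to Chat Completions when it is unavailable.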

When the hub is remote and your local endpoint is loopback-only (for example `http://localhost:11434`), Gambi tries to publish a LAN-reachable URL automatically. In interactive mode, the CLI explains the rewrite and lets you confirm or override it. Use `--network-endpoint` when you want to publish a specific URL yourself.
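The rewrite itself amounts to substituting a LAN IP for the loopback host. A minimal shell sketch of the idea (illustrative only, not Gambi's actual code):

```sh
# Swap localhost/127.0.0.1 in an endpoint URL for a LAN-reachable IP.
rewrite_endpoint() {
  url=$1
  lan_ip=$2
  printf '%s\n' "$url" | sed -E "s#//(localhost|127\.0\.0\.1)([:/])#//${lan_ip}\2#"
}

rewrite_endpoint http://localhost:11434 192.168.1.25
# → http://192.168.1.25:11434
```

The real CLI also has to detect the machine's LAN address and verify the rewritten URL is actually reachable, which is why interactive mode asks you to confirm.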

In interactive mode, you’ll select your LLM provider from a list (Ollama, LM Studio, vLLM, or custom URL), optionally add auth headers, and then choose from the detected models.

Examples:

```sh
# Join interactively — guided through all options
gambi join

# Join with Ollama
gambi join --code ABC123 --model llama3

# Join with LM Studio
gambi join --code ABC123 \
  --model mistral \
  --endpoint http://localhost:1234

# Join a remote hub and publish an explicit LAN URL
gambi join --code ABC123 \
  --hub http://192.168.1.10:3000 \
  --model llama3 \
  --endpoint http://localhost:11434 \
  --network-endpoint http://192.168.1.25:11434

# Join with custom nickname
gambi join --code ABC123 \
  --model llama3 \
  --nickname "alice-4090"

# Join a remote provider securely
export OPENROUTER_AUTH="Bearer sk-or-..."
gambi join --code ABC123 \
  --model meta-llama/llama-3.1-8b-instruct:free \
  --endpoint https://openrouter.ai/api \
  --header-env Authorization=OPENROUTER_AUTH

# Join a password-protected room
gambi join --code ABC123 \
  --model llama3 \
  --password secret123
```

## gambi list

List available rooms on a hub.

```sh
# Interactive — prompts for hub URL and output format:
gambi list

# With flags:
gambi list [options]
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--hub`, `-H` | Hub URL | `http://localhost:3000` |
| `--json`, `-j` | Output as JSON | `false` |

Example:

```sh
gambi list
# Output:
# Available rooms:
#   ABC123  My Room
#     Participants: 3
#   XYZ789  Test Room
#     Participants: 1

gambi list --json
```
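The JSON output is convenient for scripting. As a sketch, assuming the output is an array of room objects with `code` and `name` fields (a hypothetical shape — inspect the actual output before relying on it):

```sh
# Hypothetical: print "CODE<TAB>NAME" per room from `gambi list --json`,
# assuming an array of objects with `code` and `name` fields.
gambi list --json | jq -r '.[] | "\(.code)\t\(.name)"'
```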

## gambi monitor

Open the TUI to monitor rooms in real time.

```sh
gambi monitor [options]
```

Options:

| Option | Description | Default |
| --- | --- | --- |
| `--hub`, `-H` | Hub URL | `http://localhost:3000` |

The monitor shows participants, their status (online/offline), and a live activity log of events (joins, requests, errors) via SSE.

Tip: The standalone CLI currently shows help when you run `gambi` without arguments. Use `gambi monitor` to open the TUI.

## Supported Providers

Gambi works with any endpoint that exposes OpenResponses or OpenAI-compatible chat/completions:

| Provider | Default Endpoint | Protocols |
| --- | --- | --- |
| Ollama | `http://localhost:11434` | Responses API, Chat Completions |
| LM Studio | `http://localhost:1234` | Responses API, Chat Completions |
| LocalAI | `http://localhost:8080` | Responses API, Chat Completions |
| vLLM | `http://localhost:8000` | Responses API, Chat Completions |
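"OpenAI-compatible" means each of these endpoints accepts a standard Chat Completions request. A quick way to verify compatibility yourself, shown against Ollama's default port (the model name is an example):

```sh
# Send a minimal Chat Completions request to a local endpoint.
curl -s http://localhost:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

If this returns a completion rather than an error, the endpoint should work with `gambi join --endpoint`.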

For cloud providers (OpenRouter, Together AI, Groq, etc.), see the Remote Providers guide.