
Hackathon Setup

Running an AI hackathon? Gambi lets your team share LLM resources without everyone needing beefy hardware. Here’s how to get started in under 5 minutes.

Your hackathon team has:

  • Alice: Gaming laptop with RTX 4090, running Ollama with llama3
  • Bob: MacBook Pro, no GPU
  • Carol: Linux desktop with decent CPU, running Mistral via LM Studio

With Gambi, Bob can use Alice’s llama3 or Carol’s Mistral from his MacBook - no setup required on his end.

Choose one machine to run the hub. This can be any machine on the network - it doesn’t need a GPU since it just routes traffic.

# On the hub machine (e.g., Bob's MacBook)
gambi serve --port 3000 --mdns

The --mdns flag enables auto-discovery, so teammates don’t need to know the IP address.

gambi create --name "Hackathon"
# Output: Room created! Code: XK7P2M

Share this code with your team (Slack, Discord, sticky note - whatever works).

Each person with an LLM endpoint joins the room:

# Alice (Ollama)
gambi join --code XK7P2M \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname alice

# Carol (LM Studio)
gambi join --code XK7P2M \
  --endpoint http://localhost:1234 \
  --model mistral \
  --nickname carol

If the hub is running on a different machine, Gambi will automatically rewrite localhost endpoints to a LAN-reachable URL before publishing them to the hub. If your setup needs a custom published URL, pass --network-endpoint.
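For example, if the auto-rewritten address isn't reachable from the hub (VPNs and Docker networking are common culprits), you can publish an address explicitly. The IP below is a placeholder for your machine's LAN address:

```shell
# Publish an explicit LAN address instead of the auto-rewritten one
# (192.168.1.50 is illustrative — use your machine's actual LAN IP)
gambi join --code XK7P2M \
  --endpoint http://localhost:11434 \
  --network-endpoint http://192.168.1.50:11434 \
  --model llama3 \
  --nickname alice
```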

Now everyone can use the shared LLMs:

import { createGambi } from "gambi-sdk";
import { generateText } from "ai";
const gambi = createGambi({
  roomCode: "XK7P2M",
  // Hub auto-discovered via mDNS, or specify:
  // hubUrl: "http://192.168.1.100:3000"
});
// Use any available model
const result = await generateText({
  model: gambi.any(),
  prompt: "Generate a hackathon project idea",
});

To use Chat Completions instead of the default Responses API:

const gambi = createGambi({
  roomCode: "XK7P2M",
  defaultProtocol: "chatCompletions",
});

Or skip the SDK entirely and use the API directly:

curl -X POST http://192.168.1.100:3000/rooms/XK7P2M/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "*", "messages": [{"role": "user", "content": "Hello!"}]}'

See the API Reference for all endpoints.

Give meaningful nicknames so you know who’s who:

gambi join --code XK7P2M --nickname "alice-4090" --model llama3

If you need a specific model for a task:

// For code generation, use the faster model
const code = await generateText({
  model: gambi.model("llama3"),
  prompt: "Write a function to...",
});
// For creative tasks, use the other model
const story = await generateText({
  model: gambi.model("mistral"),
  prompt: "Write a story about...",
});

Keep a terminal open with the TUI to see who’s online and monitor request activity:

cd apps/tui
bun run dev XK7P2M

Laptops close, WiFi drops. Gambi handles this gracefully:

  • Participants auto-reconnect when they come back
  • Requests automatically route to available participants
  • Use gambi.any() for resilience

The hub is lightweight, but if one machine is struggling:

  • Anyone can run the hub (it doesn’t need GPU)
  • The person with the most stable connection/power is a good choice
  • Avoid running hub on the same machine as a heavy LLM

If requests aren’t getting responses:

  1. Check that participants have joined: gambi list
  2. Verify the room code is correct
  3. Make sure the LLM endpoints are running (curl http://localhost:11434/v1/models)

Some networks block mDNS. Fall back to an explicit IP address:

const gambi = createGambi({
  roomCode: "XK7P2M",
  hubUrl: "http://192.168.1.100:3000", // Hub machine's IP
});

A few performance notes:

  • LLMs are the bottleneck, not Gambi
  • Consider which model to use for which task
  • The person with the GPU should handle compute-heavy requests
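One lightweight way to act on the last two points is a small task-to-model map. The mapping below is hypothetical (pickModel is not part of gambi-sdk); the model names mirror the hackathon example above:

```typescript
// Route compute-heavy tasks to the GPU-backed model; "*" means any model.
const modelForTask: Record<string, string> = {
  code: "llama3",      // Alice's RTX 4090 handles heavy generation
  creative: "mistral", // Carol's CPU box handles lighter creative work
};

function pickModel(task: string): string {
  return modelForTask[task] ?? "*";
}
```

You’d then call gambi.model(pickModel(task)) when making a request.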

For a reusable setup, read the room code from environment variables in a shared module:

lib/ai.ts
import { createGambi } from "gambi-sdk";

export const gambi = createGambi({
  roomCode: process.env.GAMBI_ROOM!,
  hubUrl: process.env.GAMBI_HUB,
});

Use it anywhere in your app:

import { gambi } from "./lib/ai";
import { generateText } from "ai";
const { text } = await generateText({
  model: gambi.any(),
  prompt: "Generate a hackathon project idea",
});

See the SDK Reference for all routing methods and options.

Good luck with your hackathon! 🚀