# Hackathon Setup
Running an AI hackathon? Gambi lets your team share LLM resources without everyone needing beefy hardware. Here's how to get started in under 5 minutes.
## The Scenario

Your hackathon team has:
- Alice: Gaming laptop with RTX 4090, running Ollama with llama3
- Bob: MacBook Pro, no GPU
- Carol: Linux desktop with decent CPU, running Mistral via LM Studio
With Gambi, Bob can use Alice's llama3 or Carol's Mistral from his MacBook, with no setup required on his end.
## Quick Setup (5 Minutes)

### Step 1: Pick a Hub Host

Choose one machine to run the hub. This can be any machine on the network; it doesn't need a GPU since it just routes traffic.
```sh
# On the hub machine (e.g., Bob's MacBook)
gambi serve --port 3000 --mdns
```

The `--mdns` flag enables auto-discovery, so teammates don't need to know the IP address.
### Step 2: Create a Room

```sh
gambi create --name "Hackathon"
# Output: Room created! Code: XK7P2M
```

Share this code with your team (Slack, Discord, sticky note, whatever works).
### Step 3: Join with Your LLMs

Each person with an LLM endpoint joins the room:
```sh
# Alice (Ollama)
gambi join --code XK7P2M \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname alice
```
```sh
# Carol (LM Studio)
gambi join --code XK7P2M \
  --endpoint http://localhost:1234 \
  --model mistral \
  --nickname carol
```

If the hub is running on a different machine, Gambi will automatically rewrite localhost endpoints to a LAN-reachable URL before publishing them to the hub. If your setup needs a custom published URL, pass `--network-endpoint`.
### Step 4: Use from Your App

Now everyone can use the shared LLMs:
```ts
import { createGambi } from "gambi-sdk";
import { generateText } from "ai";

const gambi = createGambi({
  roomCode: "XK7P2M",
  // Hub auto-discovered via mDNS, or specify:
  // hubUrl: "http://192.168.1.100:3000",
});

// Use any available model
const result = await generateText({
  model: gambi.any(),
  prompt: "Generate a hackathon project idea",
});
```

To use Chat Completions instead of the default Responses API:
```ts
const gambi = createGambi({
  roomCode: "XK7P2M",
  defaultProtocol: "chatCompletions",
});
```

Or skip the SDK entirely and use the API directly:
```sh
curl -X POST http://192.168.1.100:3000/rooms/XK7P2M/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "*", "messages": [{"role": "user", "content": "Hello!"}]}'
```

See the API Reference for all endpoints.
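The same request can be issued from application code. A minimal sketch that builds the request the curl command above sends; the hub URL default is the example IP from this page, and `buildChatRequest` is an illustrative helper, not part of gambi-sdk:

```typescript
// Build the OpenAI-compatible chat request that the curl example sends.
// The hub URL default is the example address used on this page.
function buildChatRequest(
  roomCode: string,
  prompt: string,
  hubUrl = "http://192.168.1.100:3000",
) {
  return {
    url: `${hubUrl}/rooms/${roomCode}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // "*" lets the hub pick any available model, as in the curl example.
      body: JSON.stringify({
        model: "*",
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
// const { url, init } = buildChatRequest("XK7P2M", "Hello!");
// const res = await fetch(url, init);
```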
## Tips for Hackathons

### Use Nicknames

Give meaningful nicknames so you know who's who:
```sh
gambi join --code XK7P2M --nickname "alice-4090" --model llama3
```

### Target Specific Models
If you need a specific model for a task:
```ts
// For code generation, use the faster model
const code = await generateText({
  model: gambi.model("llama3"),
  prompt: "Write a function to...",
});

// For creative tasks, use the other model
const story = await generateText({
  model: gambi.model("mistral"),
  prompt: "Write a story about...",
});
```

### Monitor with TUI
Keep a terminal open with the TUI to see who's online and request activity:
```sh
cd apps/tui
bun run dev XK7P2M
```

### Handle Disconnections
Laptops close, WiFi drops. Gambi handles this gracefully:
- Participants auto-reconnect when they come back
- Requests automatically route to available participants
- Use `gambi.any()` for resilience
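The reconnect behavior above covers the participant side; on the caller side, a small retry wrapper (illustrative, not part of gambi-sdk) can cover the window where a request lands just as a participant drops:

```typescript
// Retry an async call a few times with a short delay between attempts.
// Useful around generateText({ model: gambi.any(), ... }) while a
// participant is reconnecting.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait before retrying so a reconnecting participant can come back.
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}
```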
### Share the Hub Load

The hub is lightweight, but if one machine is struggling:
- Anyone can run the hub (it doesn't need a GPU)
- The person with the most stable connection/power is a good choice
- Avoid running the hub on the same machine as a heavy LLM
## Troubleshooting

### "No participants online"
- Check if participants joined: `gambi list`
- Verify the room code is correct
- Make sure LLM endpoints are running (`curl http://localhost:11434/v1/models`)
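The `/v1/models` check above returns an OpenAI-compatible JSON body; a small helper (illustrative, not part of gambi-sdk) can pull the model ids out of it so you can confirm the name you passed to `--model` actually exists:

```typescript
// Shape of an OpenAI-compatible /v1/models response, e.g. the body
// returned by `curl http://localhost:11434/v1/models`.
interface ModelsResponse {
  data: Array<{ id: string }>;
}

// Extract just the model ids from the response body.
function listModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// Usage:
// const res = await fetch("http://localhost:11434/v1/models");
// console.log(listModelIds(await res.json()));
```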
### mDNS Not Working

Some networks block mDNS. Fall back to an explicit IP:
```ts
const gambi = createGambi({
  roomCode: "XK7P2M",
  hubUrl: "http://192.168.1.100:3000", // Hub machine's IP
});
```

### Slow Responses
Section titled “Slow Responses”- LLMs are the bottleneck, not Gambi
- Consider which model to use for which task
- The person with the GPU should handle compute-heavy requests
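To see where the time actually goes, a minimal timing wrapper (illustrative, not part of gambi-sdk) around any model call makes per-model latency easy to compare:

```typescript
// Measure how long an async call takes, so you can compare per-model
// latency (e.g. llama3 on the RTX 4090 vs mistral on a CPU).
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    // Log elapsed time even if the call throws.
    console.log(`${label}: ${Date.now() - start}ms`);
  }
}

// Usage:
// const result = await timed("llama3", () =>
//   generateText({ model: gambi.model("llama3"), prompt: "..." }));
```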
## Example: Hackathon Starter

```ts
import { createGambi } from "gambi-sdk";

export const gambi = createGambi({
  roomCode: process.env.GAMBI_ROOM!,
  hubUrl: process.env.GAMBI_HUB,
});
```

Use it anywhere in your app:
```ts
import { gambi } from "./lib/ai";
import { generateText } from "ai";

const { text } = await generateText({
  model: gambi.any(),
  prompt: "Generate a hackathon project idea",
});
```

See the SDK Reference for all routing methods and options.
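Since the starter reads its settings from environment variables, a small validation helper (illustrative, not part of gambi-sdk) can fail fast with a useful message when `GAMBI_ROOM` is missing instead of crashing on the non-null assertion:

```typescript
// Validate the environment variables the hackathon starter reads.
// Illustrative helper; GAMBI_ROOM and GAMBI_HUB are the variable
// names used in the starter above.
function readGambiEnv(env: Record<string, string | undefined>) {
  const roomCode = env.GAMBI_ROOM;
  if (!roomCode) {
    throw new Error("GAMBI_ROOM is not set - paste your room code here");
  }
  // GAMBI_HUB is optional: leaving it unset falls back to mDNS discovery.
  return { roomCode, hubUrl: env.GAMBI_HUB };
}

// Usage: const cfg = readGambiEnv(process.env);
```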
## What's Next?

- Check the SDK Reference for all available methods
- See Architecture to understand how it works
- Read Troubleshooting if you hit issues
Good luck with your hackathon! 🚀