# Home Lab Setup
Want to share your gaming PC’s LLM with your laptop, tablet, or other devices? This guide covers setting up Gambi as a permanent service in your home lab.
## The Scenario
You have:
- Server: Desktop/gaming PC with GPU running Ollama 24/7
- Clients: Laptop, tablet, phone, or other devices that want to use the LLM
With Gambi, you can access your server’s LLM from anywhere on your home network.
## Architecture
```
┌─────────────────────────────────────────────────────┐
│                    Home Network                     │
│                                                     │
│   ┌─────────────┐         ┌─────────────────────┐   │
│   │   Server    │         │       Clients       │   │
│   │  (GPU Box)  │◄───────►│  Laptop, Tablet...  │   │
│   │             │         │                     │   │
│   │  • Ollama   │         │  • SDK apps         │   │
│   │  • Hub      │         │  • Scripts          │   │
│   │  • Room     │         │  • Notebooks        │   │
│   └─────────────┘         └─────────────────────┘   │
│                                                     │
└─────────────────────────────────────────────────────┘
```

## Server Setup
### 1. Install Gambi
```bash
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/install.sh | bash
```

### 2. Create a Systemd Service
Create `/etc/systemd/system/gambi-hub.service`:
```ini
[Unit]
Description=Gambi Hub
After=network.target

[Service]
Type=simple
User=your-username
ExecStart=/usr/local/bin/gambi serve --port 3000 --mdns
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start:

```bash
sudo systemctl enable gambi-hub
sudo systemctl start gambi-hub
```

### 3. Create a Persistent Room
You’ll want the same room code every time. Create a startup script:
```bash
#!/bin/bash
# Wait for hub to be ready
sleep 5

# Create room (or use existing)
ROOM_CODE=$(gambi create 2>/dev/null | grep -oP 'Code: \K\w+')
echo "Room code: $ROOM_CODE"

# Join with local Ollama
gambi join $ROOM_CODE \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname homelab-gpu
```

Or add another systemd service for the participant:
```ini
[Unit]
Description=Gambi Participant
After=gambi-hub.service ollama.service
Requires=gambi-hub.service

[Service]
Type=simple
User=your-username
ExecStartPre=/bin/sleep 5
ExecStart=/usr/local/bin/gambi join YOURCODE --endpoint http://localhost:11434 --model llama3 --nickname homelab
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

### 4. Optional: Static Room Code
For a truly permanent setup, you can hardcode a room code. Check the hub logs for the generated code, or look at implementing room persistence (see Roadmap).
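In the meantime, a startup wrapper can cache the room code on disk and reuse it on the next boot. This is a sketch, not a built-in Gambi feature: it assumes `gambi create` prints a `Code: ...` line (as parsed in the startup script above), and the cache path is an arbitrary choice.

```shell
#!/bin/bash
# Sketch: reuse one room code across restarts by caching it on disk.
# Assumes `gambi create` prints "Code: <code>", as in the startup script above.

get_room_code() {
  local code_file="$1"
  if [ -f "$code_file" ]; then
    # Reuse the cached code from the previous boot
    cat "$code_file"
    return
  fi
  # Stand-in for the real output: create_output=$(gambi create 2>/dev/null)
  local create_output="Code: ABC123"
  local code
  code=$(echo "$create_output" | grep -oP 'Code: \K\w+')
  echo "$code" > "$code_file"
  echo "$code"
}

ROOM_CODE=$(get_room_code "$HOME/.gambi-room-code")
echo "Room code: $ROOM_CODE"
```

On the first run the code is created and written to the cache file; every run after that reads the same code back, so participants and clients can keep a fixed `YOURCODE`.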
## Client Setup
### From Another Machine
Section titled “From Another Machine”import { createGambi } from "gambi-sdk";import { generateText } from "ai";
const gambi = createGambi({ roomCode: "YOUR_ROOM_CODE", hubUrl: "http://192.168.1.100:3000", // Your server's IP});
const result = await generateText({ model: gambi.any(), prompt: "Hello from my laptop!",});Using mDNS
If mDNS works on your network (most home networks), you can skip the IP:
```ts
const gambi = createGambi({
  roomCode: "YOUR_ROOM_CODE",
  // Hub discovered automatically via mDNS
});
```

### From Scripts
A simple shell script to test:
```bash
#!/bin/bash
curl -X POST http://192.168.1.100:3000/rooms/YOURCODE/v1/responses \
  -H "Content-Type: application/json" \
  -d '{ "model": "*", "input": "Hello!" }'
```

## Security Considerations
### Home Network Only
Section titled “Home Network Only”Gambi is designed for trusted local networks. Don’t expose it to the internet without additional security measures.
### Firewall Rules
Allow traffic only from your local network:
```bash
# UFW example
sudo ufw allow from 192.168.1.0/24 to any port 3000
```

### Future: Password Protection
Room password protection is on the roadmap. For now, rely on network isolation.
## Multiple LLMs
You can run multiple models on the same server:
```bash
# Terminal 1: Join with llama3
gambi join YOURCODE \
  --endpoint http://localhost:11434 \
  --model llama3 \
  --nickname homelab-llama

# Terminal 2: Join with mistral (same Ollama, different model)
gambi join YOURCODE \
  --endpoint http://localhost:11434 \
  --model mistral \
  --nickname homelab-mistral
```

Then target specific models from clients:
```ts
// Use llama for code
const code = await generateText({
  model: gambi.model("llama3"),
  prompt: "Write a function...",
});

// Use mistral for text
const text = await generateText({
  model: gambi.model("mistral"),
  prompt: "Write an email...",
});
```

## Monitoring
### Check Hub Status
```bash
sudo systemctl status gambi-hub
```

### View Logs
```bash
sudo journalctl -u gambi-hub -f
```

### List Participants
```bash
gambi list
```

## Troubleshooting
### Service Won’t Start
- Check logs: `sudo journalctl -u gambi-hub -n 50`
- Verify gambi is installed: `which gambi`
- Test manually: `gambi serve --port 3000`
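The checks above can be rolled into one script that reports the first failure it finds. This is only a sketch wrapping the commands already listed; the messages are illustrative, not real Gambi output.

```shell
#!/bin/bash
# Sketch: run the "Service Won't Start" checks in order and report
# the first problem found. Messages here are illustrative.
if ! command -v gambi >/dev/null 2>&1; then
  msg="gambi not found - re-run the install script"
elif ! systemctl is-active --quiet gambi-hub 2>/dev/null; then
  msg="gambi-hub is not running - check: sudo journalctl -u gambi-hub -n 50"
else
  msg="gambi-hub looks healthy"
fi
echo "$msg"
```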
### Can’t Connect from Client
- Check server firewall
- Verify IP address: `ip addr` on server
- Test connectivity: `ping 192.168.1.100` from client
- Test port: `nc -zv 192.168.1.100 3000`
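The port test can also be done with a small bash helper that uses the shell's built-in `/dev/tcp` redirection, which works even on clients where `nc` isn't installed. A sketch, with the same example IP and port as above:

```shell
#!/bin/bash
# Sketch: test TCP reachability of the hub using bash's /dev/tcp,
# which works even where nc isn't installed.
check_hub() {
  local host="$1" port="$2"
  if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
    echo "hub reachable at $host:$port"
  else
    echo "cannot reach $host:$port"
  fi
}

# Example - replace with your server's IP and port:
# check_hub 192.168.1.100 3000
```

If this fails while `ping` succeeds, the host is up but the hub port is blocked or the service isn't listening, which usually points at the firewall rules above.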
### Ollama Not Responding
- Check Ollama is running: `curl http://localhost:11434/v1/models`
- Verify model is pulled: `ollama list`
- Check Ollama logs
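When the API responds but a specific model is missing, a quick grep over the model list tells you which `ollama pull` to run. In this sketch the JSON is a hard-coded stand-in for the real `curl` response; Ollama's `/v1/models` endpoint follows the OpenAI-compatible list format with `"id"` fields per model.

```shell
#!/bin/bash
# Sketch: check whether a model appears in Ollama's OpenAI-compatible
# model list. The JSON below is a stand-in for the real response:
#   models_json=$(curl -s http://localhost:11434/v1/models)
models_json='{"object":"list","data":[{"id":"llama3"},{"id":"mistral"}]}'

model="llama3"
if echo "$models_json" | grep -q "\"id\":\"$model\""; then
  status="$model is available"
else
  status="$model missing - run: ollama pull $model"
fi
echo "$status"
```

A plain `grep` over JSON is brittle; if you have `jq` installed, `jq -r '.data[].id'` is the more robust way to list the model IDs.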
## What’s Next?
- Add more models as participants
- Set up monitoring with TUI
- Check Troubleshooting for common issues
- See the Architecture to understand how it all works