feat: initial release - complete web chat interface with image generation
Implemented a modern web-based chat interface for multiple LLM backends
(LM Studio, Ollama, OpenRouter) with AI image generation via ComfyUI.

Features:
- Multi-server support with presets (Ollama, LM Studio, OpenRouter, Custom)
- API key management for cloud services
- Chat with streaming responses
- Image generation via /genimg command
- Lightbox for image zoom
- Code syntax highlighting with copy buttons
- LaTeX math rendering
- Multiple chat conversations with persistence
- Responsive mobile-friendly design
- Purple theme with smooth animations

Backend:
- FastAPI MCP server (port 8085) for ComfyUI integration
- Seed randomization so repeated prompts don't return cached duplicate images
- Comprehensive logging
- Start/stop scripts (start.sh, stop.sh)
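
The seed randomization mentioned above can be sketched roughly as follows. This is a hypothetical helper, not the actual code from backend/mcp_server.py; it assumes ComfyUI's API-format workflow JSON, where each node carries an "inputs" dict and samplers expose a "seed" field.

```python
import copy
import random

def randomize_seed(workflow: dict) -> dict:
    """Return a copy of a ComfyUI API-format workflow with every
    node-level "seed" input replaced by a fresh random value, so
    identical prompts don't hit ComfyUI's cache and return the
    same image twice."""
    wf = copy.deepcopy(workflow)
    for node in wf.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return wf
```

The copy keeps the template workflow (e.g. backend/Jaugernaut_wrkf.json) pristine between requests.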

Technical:
- Frontend: vanilla JS, HTML5, CSS3
- Backend: Python 3.8+, FastAPI, requests
- CORS configured for local development
- Works with local servers (Ollama on localhost:11434, LM Studio on :1234) and cloud APIs
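
A typical FastAPI CORS setup for this kind of local development looks like the sketch below. This is an assumption about what backend/mcp_server.py might contain, not an excerpt from it; the allowed origin matches the frontend port from the setup steps.

```python
# Sketch of a permissive-for-dev CORS config (hypothetical, not the shipped code).
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8084"],  # the static chat frontend
    allow_methods=["*"],
    allow_headers=["*"],
)
```

Restricting allow_origins to the frontend's origin (rather than "*") still works for local use while keeping the dev server from accepting requests from arbitrary pages.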

Setup:
1. Start ComfyUI on port 8188
2. Run ./start.sh
3. Open http://localhost:8084
4. Connect to your LLM and start chatting
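
The setup steps above imply three services on known ports. A small stdlib-only check like the following (a hypothetical convenience script, not part of the repo) can confirm everything came up:

```python
import urllib.request
import urllib.error

def service_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url`, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        # The server answered, even if with an error status (e.g. 404).
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Ports from the commit message: ComfyUI 8188, frontend 8084, MCP server 8085.
    for name, url in [
        ("ComfyUI", "http://localhost:8188"),
        ("Frontend", "http://localhost:8084"),
        ("MCP server", "http://localhost:8085"),
    ]:
        print(f"{name}: {'up' if service_up(url) else 'down'}")
```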

Docs: README.md, PROGRESS.md
Parent: 9773913 · Commit: 18ebfe92264857afb17decd9d39b077769d625e1
Authored by Eric Chicoine, 15 days ago
12 changed files:
- .gitignore (new)
- PROGRESS.md (new)
- README.md (modified)
- backend/Jaugernaut_wrkf.json (new)
- backend/mcp_server.py (new)
- backend/uvicorn_cmd.txt (new)
- docs/superpowers/plans/2026-03-26-image-generation-comfyui.md (new)
- index.html (new)
- main.js (new)
- start.sh (new, executable)
- stop.sh (new, executable)
- styles.css (new)