feat: initial release - complete web chat interface with image generation

Implemented a modern web-based chat interface for multiple LLM backends
(LM Studio, Ollama, OpenRouter) with AI image generation via ComfyUI.

Features:
- Multi-server support with presets (Ollama, LM Studio, OpenRouter, Custom)
- API key management for cloud services
- Chat with streaming responses
- Image generation via /genimg command
- Lightbox for image zoom
- Code syntax highlighting with copy buttons
- LaTeX math rendering
- Multiple chat conversations with persistence
- Responsive mobile-friendly design
- Purple theme with smooth animations

Backend:
- FastAPI MCP server (port 8085) for ComfyUI integration
- Seed randomization to prevent cache duplicates
- Comprehensive logging
- Startup/stop scripts (start.sh, stop.sh)

Technical:
- Frontend: vanilla JS, HTML5, CSS3
- Backend: Python 3.8+, FastAPI, requests
- CORS configured for local development
- Works with local servers (localhost:11434, :1234) and cloud APIs

Setup:
1. Start ComfyUI on port 8188
2. Run ./start.sh
3. Open http://localhost:8084
4. Connect to your LLM and start chatting

Docs: README.md, PROGRESS.md

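The seed randomization mentioned under Backend can be sketched as follows. This is a hedged illustration, not the actual `backend/mcp_server.py` code: the helper name `randomize_seeds` and the sample workflow node are assumptions; only the `"seed"` input key follows ComfyUI's usual workflow-JSON shape.

```python
import random

def randomize_seeds(workflow: dict) -> dict:
    """Replace every 'seed' input in a ComfyUI workflow dict with a
    fresh random value, so identical /genimg requests are not served
    a cached (duplicate) image by ComfyUI.

    Hypothetical helper for illustration; not the real server code.
    """
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return workflow

# Example: a one-node workflow fragment (node id and fields assumed).
wf = {"3": {"inputs": {"seed": 0, "steps": 20}}}
randomize_seeds(wf)
```

Mutating the workflow in place before each submission keeps the rest of the request path unchanged while guaranteeing a new seed per generation.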
| File | Mode |
|---|---|
| .gitignore | 0 → 100644 |
| PROGRESS.md | 0 → 100644 |
| README.md | |
| backend/Jaugernaut_wrkf.json | 0 → 100644 |
| backend/mcp_server.py | 0 → 100644 |
| backend/uvicorn_cmd.txt | 0 → 100644 |
| docs/superpowers/plans/2026-03-26-image-generation-comfyui.md | 0 → 100644 |
| index.html | 0 → 100644 |
| main.js | 0 → 100644 |
| start.sh | 0 → 100755 |
| stop.sh | 0 → 100755 |
| styles.css | 0 → 100644 |
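For context on the image-generation path in the commit message: ComfyUI's HTTP API accepts a workflow via `POST /prompt` with a `{"prompt": ..., "client_id": ...}` body. The sketch below only builds that payload; the helper name `build_prompt_payload` and the `client_id` value are assumptions for illustration, not code from `backend/mcp_server.py`.

```python
import json

COMFYUI_URL = "http://localhost:8188"  # default ComfyUI port from the setup steps

def build_prompt_payload(workflow: dict, client_id: str) -> str:
    """Serialize a workflow into the JSON body ComfyUI's POST /prompt
    endpoint expects: {"prompt": <workflow>, "client_id": <id>}.

    Hypothetical helper for illustration.
    """
    return json.dumps({"prompt": workflow, "client_id": client_id})

# Minimal example with a placeholder workflow fragment.
payload = build_prompt_payload({"3": {"inputs": {"steps": 20}}}, "webchat")
```

The actual server would POST this payload to `COMFYUI_URL + "/prompt"` (e.g. with `requests.post`) and then poll or listen for the finished image.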