Complete bundle:

- Local model with Ollama (`qwen2:7b-instruct`)
- Tool-based reasoning via LangChain
- REST API via FastAPI
- Open WebUI via Docker
- Clean structure, ready to run
Install Ollama (Homebrew on macOS, or the install script on Linux):

```bash
brew install ollama                              # macOS
curl -fsSL https://ollama.com/install.sh | sh    # Linux
```

Start the server and pull the model:

```bash
ollama serve
ollama pull qwen2:7b-instruct
```
Create a virtual environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
```
Install requirements:

```bash
pip install -r requirements.txt
```

Run the API:

```bash
uvicorn app.main:app --reload
```
Access:
http://localhost:8000/chat
Test with (same port 8000 that `uvicorn` serves on by default):

```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "weather in Paris and what is 23*11"}'
```
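The same request can be issued from Python with only the standard library. The helper below just builds the POST the endpoint expects; the live call, which assumes the `uvicorn` server above is running, is kept under `__main__`:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       url: str = "http://localhost:8000/chat") -> urllib.request.Request:
    """Build the JSON POST that the /chat endpoint expects."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Requires the API started with `uvicorn app.main:app --reload`.
    req = build_chat_request("weather in Paris and what is 23*11")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```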
Run Open WebUI (`--add-host` lets the container reach Ollama on the host):

```bash
docker run -d \
  --name=openwebui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
```
Then open:
http://localhost:3000
Project structure:

```
ollama_langchain_webui_bundle/
├── app/
│   ├── main.py
│   ├── agent.py
│   └── tools.py
├── requirements.txt
└── README.md
```
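Since the test prompt above mixes a weather question with arithmetic, `app/tools.py` plausibly registers a weather tool and a calculator for the agent to call. A hypothetical stdlib sketch of the calculator half; the names and scope are assumptions, not the bundle's actual file:

```python
import ast
import operator

# Operators the calculator tool is allowed to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def _eval(node: ast.AST) -> float:
    """Recursively evaluate a whitelisted arithmetic AST node."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("unsupported expression")

def calculator(expression: str) -> float:
    """Safely evaluate arithmetic like '23*11' without eval()."""
    return _eval(ast.parse(expression, mode="eval").body)
```

Parsing with `ast` instead of calling `eval()` keeps the tool safe against arbitrary code in model-generated arguments.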
MIT Licensed.