🧠 Ollama + Qwen + LangChain + Open WebUI (Docker + FastAPI)

Complete bundle (a minimal wiring sketch follows this list):

  • Local model with Ollama (qwen2:7b-instruct)
  • Tool-based reasoning via LangChain
  • REST API via FastAPI
  • Open WebUI via Docker
  • Clean structure, ready to run
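
To make the moving parts concrete, here is a minimal sketch of how the pieces in app/ could fit together. It is an illustration, not the repository's exact code: the multiply tool and the response shape are assumptions.

# sketch of app/main.py — hypothetical, for orientation only
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# qwen2:7b-instruct is served locally by Ollama (default port 11434)
llm = ChatOllama(model="qwen2:7b-instruct").bind_tools([multiply])

app = FastAPI()

class ChatRequest(BaseModel):
    prompt: str

@app.post("/chat")
def chat(req: ChatRequest):
    # A full agent would execute msg.tool_calls and feed the results
    # back to the model; this sketch returns the first response as-is.
    msg = llm.invoke(req.prompt)
    return {"response": msg.content, "tool_calls": msg.tool_calls}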

📦 Setup Instructions

1. Install Ollama

macOS:

brew install ollama

Linux:

curl -fsSL https://ollama.com/install.sh | sh

2. Start Ollama Server

ollama serve
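
Ollama listens on http://localhost:11434 by default. A quick sanity check from Python (a sketch using the requests library, which is not part of this bundle):

import requests

# /api/tags lists the models available to the local Ollama server
r = requests.get("http://localhost:11434/api/tags", timeout=5)
r.raise_for_status()
print([m["name"] for m in r.json()["models"]])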

3. Pull Model

ollama pull qwen2:7b-instruct
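
Once the pull finishes, you can verify the model responds outside of LangChain by hitting Ollama's REST API directly (again a sketch with requests):

import requests

payload = {
    "model": "qwen2:7b-instruct",
    "prompt": "Reply with one word: hello",
    "stream": False,  # return a single JSON object instead of a token stream
}
r = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
print(r.json()["response"])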

4. Install Python Dependencies

Create a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

Install requirements:

pip install -r requirements.txt

5. Run FastAPI Server

uvicorn app.main:app --reload

Access (uvicorn serves on port 8000 by default):

http://localhost:8000/chat

Test with:

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "weather in Paris and what is 23*11"}'
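
The same test from Python (the JSON response shape depends on what app/main.py returns, so adjust accordingly):

import requests

resp = requests.post(
    "http://localhost:8000/chat",
    json={"prompt": "weather in Paris and what is 23*11"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())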

6. Start Open WebUI (Docker)

docker run -d \
  --name=openwebui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
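
The --add-host flag maps host.docker.internal to the Docker host, so the container can reach the Ollama server on the host's default port 11434; Open WebUI should detect it automatically.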

Then open:

http://localhost:3000

File Structure

ollama_langchain_webui_bundle/
├── app/
│   ├── main.py          # FastAPI app exposing the /chat endpoint
│   ├── agent.py         # LangChain agent wiring the model to its tools
│   └── tools.py         # tool definitions used by the agent
├── requirements.txt     # Python dependencies
└── README.md
MIT Licensed.
