# Local RAG Stack

A complete Retrieval-Augmented Generation (RAG) setup running locally with Docker Compose.

## Architecture

- **Ollama** - local embeddings using `mxbai-embed-large`
- **PostgreSQL + pgvector** - vector database for semantic storage
- **Flask web app** - simple interface to store and search documents
- **Docker Compose** - orchestrates all services
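These services are wired together in a single `docker-compose.yml`. A minimal sketch of what such a file can look like (image tags, the `db` and `web` service names, and the credentials here are assumptions, not the repository's exact file; only `rag_ollama` and the ports are taken from this README):

```yaml
services:
  ollama:
    image: ollama/ollama            # embedding server
    container_name: rag_ollama      # container name used in the preload step below
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist pulled models across restarts

  db:
    image: pgvector/pgvector:pg16   # PostgreSQL with the pgvector extension
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres   # placeholder credential

  web:
    build: .                        # the Flask app
    ports:
      - "8000:8000"
    depends_on:
      - ollama
      - db

volumes:
  ollama_data:
```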

## Quick Start

### Prerequisites

- Docker & Docker Compose installed
- At least 4GB of free disk space (for models and data)

### 1. Start the Stack

```sh
docker compose up --build
```

### 2. Access the Web Interface

Open your browser to: http://localhost:8000

### 3. (Optional) Preload the Embedding Model

```sh
docker exec -it rag_ollama ollama pull mxbai-embed-large
```
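Once the model is pulled, the embedding service can be sanity-checked directly against the Ollama API (this assumes the stack is running and port 11434 is published as listed below; the prompt text is just an example):

```sh
curl http://localhost:11434/api/embeddings \
  -d '{"model": "mxbai-embed-large", "prompt": "hello world"}'
```

The response is a JSON object with an `embedding` field containing 1024 floats, matching the `VECTOR(1024)` column in the schema.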

## Ports

- `8000`: Web application
- `11434`: Ollama API
- `5432`: PostgreSQL database

## Database Schema

```sql
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    text TEXT,
    embedding VECTOR(1024)  -- mxbai-embed-large produces 1024-dim vectors
);
```
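Semantic search over this table is typically done with pgvector's cosine-distance operator `<=>` (e.g. `SELECT id, text FROM documents ORDER BY embedding <=> $1 LIMIT 5;` — the exact query the app runs may differ). That operator returns 1 minus cosine similarity; a small pure-Python illustration of the metric, with toy 3-dimensional vectors standing in for the 1024-dimensional embeddings:

```python
import math

def cosine_distance(a, b):
    """Cosine distance as pgvector's <=> operator defines it: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Parallel vectors are at distance 0; orthogonal vectors are at distance 1.
print(cosine_distance([1.0, 0.0, 0.0], [2.0, 0.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0
```

Lower distance means more similar, which is why ordering ascending by `embedding <=> $1` returns the closest documents first.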
