kndeepak/LLM-RAG-invoice-Local_CPU


Invoice data processing with Llama2 13B LLM RAG on Local CPU

Quickstart

RAG runs on: LlamaCPP, Haystack, Weaviate

  1. Download the Llama2 13B model; see models/model_download.txt for the download link.
  2. Start a local Weaviate database with Docker:

docker compose up -d
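For reference, a minimal docker-compose.yml for a local Weaviate instance might look like the sketch below. The image tag, port, and environment settings are illustrative assumptions; use the compose file shipped with this repository.

```yaml
# Hypothetical minimal Weaviate setup for local use (check the repo's own file)
version: "3.4"
services:
  weaviate:
    image: semitechnologies/weaviate:1.21.2
    ports:
      - "8080:8080"
    environment:
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: "none"   # embeddings are computed in ingest.py
    volumes:
      - weaviate_data:/var/lib/weaviate
volumes:
  weaviate_data:
```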

  3. Install the requirements:

pip install -r requirements.txt

  4. Copy text PDF files to the data folder.
  5. Run the ingestion script to convert the text into vector embeddings and save them in the Weaviate vector store:

python ingest.py
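Before embedding, ingestion pipelines typically split the extracted PDF text into overlapping chunks so that each embedding keeps local context. A minimal sketch of that chunking step (the function name, chunk size, and overlap are illustrative assumptions, not the repo's actual code):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks for embedding.

    Hypothetical helper: consecutive chunks share `overlap` characters so
    that facts spanning a chunk boundary still appear whole in one chunk.
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Example: a short invoice-like string split into small chunks
sample = "Invoice No: INV-1001. Date: 2023-01-15. Total: $250.00."
pieces = chunk_text(sample, chunk_size=30, overlap=10)
```

Each chunk would then be embedded and written to Weaviate as a separate document.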

  6. Run the main script to query the data with the Llama2 13B LLM RAG pipeline and return an answer:

python main.py "What is the invoice number value?"

FastAPI Server

  1. Start the FastAPI server with uvicorn:

uvicorn main:app --reload

  2. Run a query in this format:

http://127.0.0.1:8000/get_answer/?query=YourQuestionHere
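From Python, the same endpoint can be called by URL-encoding the question into the `query` parameter. A small sketch using only the standard library (the endpoint path comes from above; the response format is an assumption, so the actual request is shown commented out):

```python
from urllib.parse import urlencode
import urllib.request
import json

BASE_URL = "http://127.0.0.1:8000/get_answer/"

def build_query_url(question: str) -> str:
    """Encode the question into the endpoint's ?query= parameter."""
    return BASE_URL + "?" + urlencode({"query": question})

url = build_query_url("What is the invoice number value?")

# To actually call the endpoint, the uvicorn server must be running:
# with urllib.request.urlopen(url) as resp:
#     print(json.load(resp))
```

`urlencode` takes care of escaping spaces and punctuation, so questions can be passed as plain strings.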
