📚 Reader3 - Intelligent EPUB Reader with AI Assistant

A modern, self-hosted EPUB reader that combines AI reading assistance with privacy-preserving local model options.

✨ Key Features

📖 Advanced Reading Experience

  • Complete EPUB Support: Full compatibility with EPUB 2.0 and 3.0 standards (see the parsing sketch after this list)
  • Chapter Navigation: Intuitive table of contents with progress tracking
  • Image Rendering: High-quality image display within chapters
  • Responsive Design: Optimized for desktop, tablet, and mobile devices
  • Dark Mode: Eye-friendly reading mode for low-light environments
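
For a feel of what chapter extraction involves, here is a minimal sketch using the ebooklib and BeautifulSoup packages. Reader3 ships its own parser in reader3.py, so treat the library choice and the function below as illustrative assumptions rather than the project's actual code.

# Illustrative chapter extraction, not Reader3's actual parser.
# Assumes `pip install ebooklib beautifulsoup4`.
import ebooklib
from ebooklib import epub
from bs4 import BeautifulSoup

def list_chapters(path: str) -> list[dict]:
    """Return rough chapters (title guess + plain text) from an EPUB file."""
    book = epub.read_epub(path)
    chapters = []
    for item in book.get_items_of_type(ebooklib.ITEM_DOCUMENT):
        soup = BeautifulSoup(item.get_content(), "html.parser")
        heading = soup.find(["h1", "h2"])
        chapters.append({
            "title": heading.get_text(strip=True) if heading else item.get_name(),
            "text": soup.get_text(separator="\n", strip=True),
        })
    return chapters

for chapter in list_chapters("books/example.epub")[:3]:
    print(chapter["title"], "-", len(chapter["text"]), "characters")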

🤖 Multi-Provider AI Integration

  • OpenAI: Full API integration with GPT models
  • LM Studio: Local AI model support with automatic configuration
  • Ollama: Complete local LLM integration for offline usage
  • Provider Switching: Seamless switching between AI providers (see the sketch after this list)
  • Privacy Protection: Local processing options for sensitive content
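
All three providers are reachable through OpenAI-compatible /v1 endpoints (as the base URLs in the Configuration section suggest), so switching mostly comes down to pointing one client at a different base URL. The sketch below illustrates the idea; the environment variable names match the .env example further down, but the client code itself is an illustration, not Reader3's actual implementation.

# Illustrative provider switching via OpenAI-compatible endpoints; not Reader3's actual code.
import os
from openai import OpenAI  # pip install openai

# Defaults mirror the .env example in the Configuration section.
PROVIDERS = {
    "openai":   (os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
                 os.getenv("OPENAI_API_KEY", ""),
                 os.getenv("OPENAI_MODEL", "gpt-4o-mini")),
    "lmstudio": (os.getenv("LMSTUDIO_BASE_URL", "http://localhost:1234/v1"),
                 "not-needed",
                 os.getenv("LMSTUDIO_MODEL", "local-model")),
    "ollama":   (os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
                 "not-needed",
                 os.getenv("OLLAMA_MODEL", "llama3.1:8b")),
}

def ask(provider: str, question: str) -> str:
    base_url, api_key, model = PROVIDERS[provider]
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(ask(os.getenv("AI_PROVIDER", "ollama"), "Who is the narrator of this chapter?"))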

🛠️ Developer-Friendly

  • Modern Tech Stack: FastAPI + Alpine.js + TailwindCSS
  • Docker Support: Complete containerization with Docker Compose
  • API-First Design: RESTful APIs for easy integration (see the example after this list)
  • Extensible Architecture: Plugin-ready system for custom features
  • Comprehensive Tooling: Development and deployment utilities
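
Because the backend is built on FastAPI, a running instance documents its own routes at /docs (the auto-generated OpenAPI UI). As a rough illustration of the API-first idea, a client might drive the server like this; the endpoint paths and field names below are placeholders, not confirmed routes.

# Hypothetical REST client; endpoint paths and field names are assumptions.
# Check http://localhost:8123/docs for the real routes exposed by the server.
import requests

BASE = "http://localhost:8123"

# Upload an EPUB into the library (placeholder endpoint)
with open("books/example.epub", "rb") as f:
    created = requests.post(f"{BASE}/api/books", files={"file": f})
    created.raise_for_status()

# List the library (placeholder endpoint)
for book in requests.get(f"{BASE}/api/books").json():
    print(book)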

🌐 Internationalization

  • Multi-Language Support: English and Simplified Chinese
  • RTL Text Support: Right-to-left language compatibility
  • Localized UI: Complete interface translation
  • Dynamic Language Switching: Runtime language changes

🚀 Quick Start

🐳 Docker Compose (Recommended)

# Clone the repository
git clone https://github.com/ohmyscott/reader3.git
cd reader3

# Configure environment
cp .env.example .env
# Edit .env with your AI provider settings

# Start the application
docker compose up -d

# Access the application
open http://localhost:8123

💻 Local Development

# Prerequisites:
#   Python 3.10+
#   Node.js 16+ (for frontend development)

# Install dependencies
pip install uv
uv sync

# Start the server
uv run python server.py

# Or use the operations script
./ops.sh dev start

🏗️ Architecture

┌──────────────────┐
│  User Interface  │
└────────┬─────────┘
         │
         ▼
┌─────────────────────────────────────────────────────────────┐
│             Frontend (Alpine.js + TailwindCSS)              │
└──────────────────────────────┬──────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       FastAPI Backend                       │
└──────────────────────────────┬──────────────────────────────┘
                               │
    ┌────────────┬─────────────┼─────────────┬───────────┐
    ▼            ▼             ▼             ▼           ▼
┌────────┐  ┌─────────┐  ┌───────────┐  ┌─────────┐  ┌────────┐
│  EPUB  │  │   AI    │  │ Provider  │  │ TinyDB  │  │  File  │
│ Parser │  │ Service │  │Abstraction│  │ Storage │  │ System │
└────────┘  └─────────┘  └─────┬─────┘  └─────────┘  └────────┘
                               │
                ┌──────────────┼──────────────┐
                ▼              ▼              ▼
          ┌───────────┐  ┌───────────┐  ┌───────────┐
          │  OpenAI   │  │ LM Studio │  │  Ollama   │
          │    API    │  │    API    │  │    API    │
          └───────────┘  └───────────┘  └───────────┘
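
TinyDB in the storage tier is a small document database kept as a JSON file on disk. Here is a sketch of how per-book reading state might be persisted with it; the table and field names are illustrative assumptions, not Reader3's actual schema.

# Illustrative use of TinyDB for per-book state; table and field names are assumptions.
from tinydb import TinyDB, Query

db = TinyDB("data/reader3.json")   # a single JSON file on disk
progress = db.table("progress")
Book = Query()

def save_position(book_id: str, chapter: int, offset: float) -> None:
    """Upsert the reader's last position for one book."""
    progress.upsert(
        {"book_id": book_id, "chapter": chapter, "offset": offset},
        Book.book_id == book_id,
    )

def load_position(book_id: str) -> dict | None:
    hits = progress.search(Book.book_id == book_id)
    return hits[0] if hits else None

save_position("moby-dick", chapter=3, offset=0.42)
print(load_position("moby-dick"))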

⚙️ Configuration

Environment Variables

Create a .env file with your preferred AI provider configuration:

# AI Provider Selection (openai, lmstudio, ollama)
AI_PROVIDER=ollama

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini

# LM Studio Configuration
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=your_local_model

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=llama3.1:8b

# Application Settings
PORT=8123
HOST=0.0.0.0
BOOKS_DIR=./books
UPLOAD_DIR=./uploads
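
If you want to see how these variables could be consumed in code, here is a minimal sketch using python-dotenv; how server.py actually loads its configuration may differ.

# Minimal sketch of reading the settings above; server.py may load them differently.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # pulls variables from .env in the working directory

AI_PROVIDER = os.getenv("AI_PROVIDER", "ollama")
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8123"))
BOOKS_DIR = os.getenv("BOOKS_DIR", "./books")

print(f"{AI_PROVIDER=} {HOST=} {PORT=} {BOOKS_DIR=}")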

Provider Configuration

Provider    API Key Required   Base URL          Privacy   Cost
OpenAI      Yes                api.openai.com    Cloud     Pay-per-token
LM Studio   No                 localhost:1234    Local     Free
Ollama      No                 localhost:11434   Local     Free

📱 Screenshots

📚 Library View

📖 Reading Interface

🤖 AI Assistant

⚙️ Settings

🛠️ Development

Project Structure

reader3/
├── 📁 frontend/                 # Frontend application
│   ├── 📄 index.html           # Main application page
│   ├── 📄 reader.html          # Reader interface
│   ├── 📁 css/                  # Stylesheets
│   ├── 📁 js/                   # JavaScript modules
│   └── 📁 locales/              # Internationalization
├── 📁 api/                     # API modules
├── 📁 data/                    # Data storage
├── 📁 docs/                    # Documentation
├── 🐳 docker-compose.yml       # Docker configuration
├── 🐳 Dockerfile               # Container definition
├── 📄 server.py                # FastAPI application
├── 📄 reader3.py               # EPUB processing utility
├── 📄 ops.sh                   # Operations script
└── 📄 requirements.txt         # Python dependencies

Development Workflow

# Start development server
./ops.sh dev start

# Check service status
./ops.sh dev ps

# Stop development server
./ops.sh dev stop

🐳 Docker Deployment

Production Deployment

We recommend using the operations script for production deployment:

# Quick production setup
./ops.sh prod start

# Or use Docker Compose directly
docker-compose -f docker-compose.prod.yml up -d

# Scale the application
docker-compose -f docker-compose.prod.yml up -d --scale reader3=3

# Check production status
./ops.sh prod ps

📊 Performance

Benchmarks

Metric              Value
Startup Time        < 2s
Memory Usage        < 512MB (base)
Book Processing     < 5s per 1000 chapters
Concurrent Users    100+
API Response Time   < 500ms (local AI)

System Requirements

Minimum:

  • CPU: 2 cores
  • RAM: 4GB
  • Storage: 10GB
  • OS: Linux/macOS/Windows

Recommended:

  • CPU: 4 cores
  • RAM: 8GB
  • Storage: 50GB SSD
  • OS: Linux with Docker

🔧 Operations

Management Commands

# Application management (development)
./ops.sh dev start      # Start development server
./ops.sh dev stop       # Stop development server
./ops.sh dev restart    # Restart development server
./ops.sh dev ps         # Check service status

# Application management (production)
./ops.sh prod start     # Start production server
./ops.sh prod stop      # Stop production server
./ops.sh prod restart   # Restart production server
./ops.sh prod ps        # Check production status

# Build production images
./ops.sh prod build     # Build Docker images

# File management
./ops.sh ls             # Show EPUB statistics
./ops.sh clean lru      # Clean old files
./ops.sh clean lru 5    # Keep 5 most recent files

🧪 Testing

Test Suite

Automated tests are not yet implemented (TODO). Planned coverage includes:

  • Unit tests
  • Integration tests
  • End-to-end tests
  • Test coverage reporting

Manual Testing

Until then, you can verify behavior manually (see the smoke-test sketch below):

  • Upload and read EPUB files
  • Exercise the AI assistant
  • Verify switching between the OpenAI, LM Studio, and Ollama providers
  • Check the responsive design on different screen sizes
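
For a quick scripted smoke test against a locally running instance, something like the following can help; the routes below are assumptions, so adjust them to whatever the server's /docs page actually lists.

# Hypothetical smoke tests for a locally running instance; the routes are assumptions.
# Run with: uv run pytest smoke_test.py  (requires `requests` and a server on :8123)
import requests

BASE = "http://localhost:8123"

def test_server_is_up():
    # FastAPI serves its interactive API docs at /docs unless that has been disabled.
    assert requests.get(f"{BASE}/docs", timeout=5).status_code == 200

def test_frontend_is_served():
    # The frontend index page is assumed to live at the root URL.
    assert requests.get(BASE, timeout=5).status_code == 200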

🤝 Contributing

We welcome contributions! Please read our Contributing Guidelines for details.

Development Process

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

Code Standards

  • Follow PEP 8 for Python code
  • Write comprehensive tests for new features
  • Update documentation for API changes

Issue Reporting

  • Use the issue template for bugs
  • Provide detailed reproduction steps
  • Include system information and logs

📚 Documentation

🔒 Security

Security Features

  • Local AI Options: Process sensitive content locally
  • API Key Protection: Secure storage and masking
  • Input Validation: Comprehensive input sanitization
  • CORS Configuration: Proper cross-origin settings (see the example after this list)
  • Rate Limiting: API request throttling
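
As an illustration of the CORS point above, this is roughly what the configuration looks like with FastAPI's built-in middleware; the exact origins and options used in Reader3's server.py may differ.

# Illustrative CORS setup with FastAPI's built-in middleware; not Reader3's exact settings.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8123"],  # restrict to the frontend origin
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)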

Security Reporting

Please report security issues privately to [email protected]

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

📞 Support


Made with ❤️ for readers who love AI assistance
