@axc888 commented Sep 28, 2025

Description

This PR adds a comprehensive MLflow integration pipeline for Open WebUI that enables real-time conversation tracking, analytics, and performance monitoring. The pipeline automatically logs all user-AI interactions to MLflow with detailed metrics and artifacts.

Features Added

  • Real-time Conversation Tracking: Automatically logs all user-AI interactions to MLflow
  • Comprehensive Metrics: Tracks tokens, response times, message lengths, and interaction counts
  • Flexible Run Management: Choose between per-conversation or per-interaction tracking modes
  • User Analytics: Monitor user behavior patterns and usage statistics
  • Model Performance Tracking: Track model response times and token efficiency
  • Artifact Storage: Save conversation history and individual messages as MLflow artifacts
  • Rich Metadata: Extensive tagging and parameter tracking for easy filtering and analysis
  • Robust Error Handling: Graceful degradation when MLflow server is unavailable

Type of Change

  • New feature

Core Functionality

  • Filter Pipeline: Implements both the inlet() and outlet() methods to capture the complete request/response cycle
  • MLflow Integration: Creates experiments and runs with comprehensive logging
  • Dual Tracking Modes:
    • Per-conversation: One MLflow run per chat session
    • Per-interaction: Separate MLflow run for each user-AI exchange
  • Comprehensive Data Collection:
    • User inputs and AI responses as artifacts
    • Token usage metrics (input/output/total tokens)
    • Response timing measurements
    • Message length analytics
    • Model information and metadata
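The inlet/outlet filter shape described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the class name, field names, and the in-memory timer map are all assumptions, and the real MLflow calls are only marked in comments.

```python
import time


class MLflowFilterSketch:
    """Illustrative filter: inlet() stamps the request, outlet() derives metrics.

    The places where the real pipeline would call into MLflow are marked
    with comments; no MLflow connection is made here.
    """

    def __init__(self):
        self._start_times = {}  # chat_id -> monotonic timestamp of the request

    def inlet(self, body: dict) -> dict:
        # Capture the incoming user message and start the response timer.
        chat_id = body.get("chat_id", "unknown")
        self._start_times[chat_id] = time.monotonic()
        # Real pipeline: open (or reuse) an MLflow run here and log the
        # user message as an artifact, per the tracking mode in use.
        return body

    def outlet(self, body: dict) -> dict:
        # Compute response time once the assistant message comes back.
        chat_id = body.get("chat_id", "unknown")
        started = self._start_times.pop(chat_id, None)
        elapsed = time.monotonic() - started if started is not None else 0.0
        body["response_time"] = round(elapsed, 3)
        # Real pipeline: mlflow.log_metric("response_time", elapsed), plus
        # the assistant message as an artifact, would go here.
        return body
```

Keeping the timer state keyed by chat_id is one simple way to pair an outlet() call with its originating inlet() call across concurrent chats.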

Configuration Options

The pipeline supports extensive configuration via environment variables and valves:

| Environment Variable | Description | Default |
|---|---|---|
| `MLFLOW_TRACKING_URI` | MLflow server URL | `http://localhost:5000` |
| `MLFLOW_EXPERIMENT_NAME` | Experiment name | `open-webui-experiments` |
| `SEPARATE_RUNS` | Enable per-interaction tracking | `false` |
| `USE_MODEL_NAME` | Use the model name instead of its ID | `false` |
| `DEBUG_MODE` | Enable debug logging | `false` |

Data Structure

Tags: source, interface, user_id, chat_id, run_type, status, total_interactions

Parameters: model_id, model_name, user_email, chat_id, interface, task_type

Metrics: user_message_length, assistant_message_length, response_time, input_tokens, output_tokens, total_tokens

Artifacts: User inputs, AI responses, conversation history (JSON)

Requirements

  • mlflow>=2.0.0
  • requests>=2.25.0
  • MLflow server (local or remote)

Usage Examples

Basic Setup

1. Install MLflow (version 2.0.0 or higher)

pip install "mlflow>=2.0.0"

2. Start the MLflow tracking server

mlflow server --host 0.0.0.0 --port 5000

3. Configure Open WebUI (or any client) to use MLflow

export MLFLOW_TRACKING_URI=http://localhost:5000
export MLFLOW_EXPERIMENT_NAME=my-conversations

Screenshots

[Screenshots: mlflow_openwebui_integration1, openwebui_pipeline]
