@Lexsi-Labs

Lexsi.ai

Aligned and safe AI

https://www.lexsi.ai

Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧

Lexsi Labs drives Aligned and Safe AI Frontier Research. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.

Research Focus

  • Aligned & Safe AI: Frameworks for self-monitoring, interpretable, and alignment-aware systems.
  • Explainability & Alignment: Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
  • Safe Behaviour Control: Techniques for fine-tuning, pruning, and behavioural steering in large models.
  • Risk & Governance: Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
  • Tabular & LLM Research: Foundational work on tabular intelligence, in-context learning, and interpretable large language models.
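As a generic illustration of the drift detection mentioned under Risk & Governance (this is not Lexsi Labs code; the function name and thresholds are hypothetical), a minimal Population Stability Index check in Python might look like:

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a production feature sample against a training-time reference.

    Bin edges come from reference quantiles, so each bin holds roughly equal
    reference mass; the PSI is the symmetrized KL divergence over those bins.
    """
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the reference range
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    # Small floor avoids log(0) when a bin is empty in one sample.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 2000)  # training-time feature values
shifted = rng.normal(0.5, 1.0, 2000)   # production values with a mean shift

print(population_stability_index(baseline, baseline))  # ~0: no drift
print(population_stability_index(baseline, shifted))   # well above 0.1: drift
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift; a monitoring system would run this per feature on a schedule and alert on the threshold.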

Popular repositories

  1. TabTune

    TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

    Python · 50 stars · 3 forks

  2. Orion-MSP

    Python · 24 stars · 3 forks

  3. DLBacktrace

    DL Backtrace is a new explainability technique for deep learning models that works for any modality and model type.

    Python · 17 stars · 3 forks

  4. xai_evals

    Evaluation Metrics for Explainability Methods

    Python · 12 stars · 1 fork

  5. Orion-BiX

    Python · 8 stars · 1 fork

  6. AryaXAI-SDK

    SDK for AryaXAI components

    Python · 4 stars

Repositories

9 public repositories.

People

This organization has no public members.
