Documentation: DSPy Docs
This is a fork of DSPy that has been modified to be fully async. Underlying behavior is untouched, with the exception of global per-thread settings overrides. The goal of this fork is to maintain parity and release cadence with DSPy (which is not complex or a large time-sink, given that the vast majority of changes are spamming async/await on various methods). It aims to be a near-drop-in replacement for dspy.
The high-level changes are as follows:
- Calls to tools, metrics, and modules must be awaited
- Implementations of tools, metrics, and modules must be declared as async
  - Including `__call__` and `forward`
- The dspy `Settings` object is now passed forward into every `__call__` and `forward` method - as well as the callbacks - instead of being overridden globally on a per-thread basis
  - This allows multiple dspy instances to be used in the same thread without mutating the settings context of other running dspy instances
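In practice, these conventions might look roughly like the sketch below. This is a minimal, hedged sketch, not the fork's documented API: how a `Settings` instance is constructed (shown here as `dspy.Settings(lm=...)`) and where it appears in the call signature (first positional argument) are assumptions, and the model names are placeholders. Consult the fork's examples for the real signatures.

```python
# Hypothetical sketch of a dspy-async module. Assumed (not confirmed by the fork):
# direct construction of a Settings object and passing it as the first positional
# argument to __call__/forward.
import asyncio

import dspy


class SimpleQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.answer = dspy.ChainOfThought("question -> answer")

    # forward is declared async; sub-module calls are awaited, and the Settings
    # object flows through explicitly instead of a thread-global override.
    async def forward(self, settings, question):
        return await self.answer(settings, question=question)


async def main():
    # Hypothetical construction of two independent Settings objects. Because
    # settings are passed per call rather than set globally, both can run in
    # the same thread/event loop without clobbering each other.
    fast = dspy.Settings(lm=dspy.LM("openai/gpt-4o-mini"))
    strong = dspy.Settings(lm=dspy.LM("openai/gpt-4o"))

    qa = SimpleQA()
    a, b = await asyncio.gather(
        qa(fast, question="What is DSPy?"),
        qa(strong, question="What does 'declarative' mean here?"),
    )
    print(a.answer, b.answer, sep="\n")


if __name__ == "__main__":
    asyncio.run(main())
```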
For examples on how to use dspy-async, see:
DSPy is the framework for programming—rather than prompting—language models. It allows you to iterate fast on building modular AI systems and offers algorithms for optimizing their prompts and weights, whether you're building simple classifiers, sophisticated RAG pipelines, or Agent loops.
DSPy stands for Declarative Self-improving Python. Instead of brittle prompts, you write compositional Python code and use DSPy to teach your LM to deliver high-quality outputs. Learn more via our official documentation site or meet the community, seek help, or start contributing via this GitHub repo and our Discord server.
Documentation: dspy.ai
Please go to the DSPy Docs at dspy.ai
pip install dspy-async
To install the very latest from the full_async branch:
pip install git+https://github.com/swiftdevil/dspy.git@full_async
If you're looking to understand the framework, please go to the DSPy Docs at dspy.ai.
If you're looking to understand the underlying research, this is a set of our papers:
[Jun'24] Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
[Oct'23] DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
[Jul'24] Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together
[Jun'24] Prompts as Auto-Optimized Training Hyperparameters
[Feb'24] Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models
[Jan'24] In-Context Learning for Extreme Multi-Label Classification
[Dec'23] DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines
[Dec'22] Demonstrate-Search-Predict: Composing Retrieval & Language Models for Knowledge-Intensive NLP
To stay up to date or learn more, follow @lateinteraction on Twitter.
The DSPy logo is designed by Chuyi Zhang.
If you use DSPy or DSP in a research paper, please cite our work as follows:
@inproceedings{khattab2024dspy,
title={DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines},
author={Khattab, Omar and Singhvi, Arnav and Maheshwari, Paridhi and Zhang, Zhiyuan and Santhanam, Keshav and Vardhamanan, Sri and Haq, Saiful and Sharma, Ashutosh and Joshi, Thomas T. and Moazam, Hanna and Miller, Heather and Zaharia, Matei and Potts, Christopher},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024}
}
@article{khattab2022demonstrate,
title={Demonstrate-Search-Predict: Composing Retrieval and Language Models for Knowledge-Intensive {NLP}},
author={Khattab, Omar and Santhanam, Keshav and Li, Xiang Lisa and Hall, David and Liang, Percy and Potts, Christopher and Zaharia, Matei},
journal={arXiv preprint arXiv:2212.14024},
year={2022}
}