Commit a0463cb

Merge branch 'main' into feature/add-MyScale-to-retrieve

2 parents: 88a720c + 300f0ed

File tree: 10 files changed (+32, -30 lines)


README.md

Lines changed: 3 additions & 4 deletions

````diff
@@ -72,12 +72,11 @@
 By default, DSPy installs the latest `openai` from pip. However, if you install old version before OpenAI changed their API `openai~=0.28.1`, the library will use that just fine. Both are supported.
 
-For the optional (alphabetically sorted) [Chromadb](https://github.com/chroma-core/chroma), [Marqo](https://github.com/marqo-ai/marqo), [Milvus](https://github.com/milvus-io/milvus), [MyScaleDB](https://github.com/myscale/myscaledb), MongoDB, [MyScaleDB](https://github.com/myscale/myscaledb), Pinecone, [Qdrant](https://github.com/qdrant/qdrant), or [Weaviate](https://github.com/weaviate/weaviate) retrieval integration(s), include the extra(s) below:
+For the optional (alphabetically sorted) [Chromadb](https://github.com/chroma-core/chroma), [Groq](https://github.com/groq/groq-python), [Marqo](https://github.com/marqo-ai/marqo), [Milvus](https://github.com/milvus-io/milvus), [MongoDB](https://www.mongodb.com), [MyScaleDB](https://github.com/myscale/myscaledb), Pinecone, [Qdrant](https://github.com/qdrant/qdrant), [Snowflake](https://github.com/snowflakedb/snowpark-python), or [Weaviate](https://github.com/weaviate/weaviate) retrieval integration(s), include the extra(s) below:
 
 ```
-pip install dspy-ai[chromadb] # or [marqo] or [milvus] or [myscale] or [mongodb] or [myscale] or [pinecone] or [qdrant] or [weaviate]
+pip install dspy-ai[chromadb] # or [groq] or [marqo] or [milvus] or [mongodb] or [myscale] or [pinecone] or [qdrant] or [snowflake] or [weaviate]
 ```
 
 ## 2) Documentation
````

docs/api/functional/dspy_cot.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -8,7 +8,7 @@ sidebar_position: 4
 #### `def cot(func) -> dspy.Module`
 
-The `@cot` decorator is used to create a Chain of Thoughts module based on the provided function. It automatically generates a `dspy.TypedPredictor` and from the function's type annotations and docstring. Similar to predictor, but adds a "Reasoning" output field to capture the model's step-by-step thinking.
+The `@cot` decorator is used to create a Chain of Thoughts module based on the provided function. It automatically generates a `dspy.TypedPredictor` from the function's type annotations and docstring. Similar to predictor, but adds a "Reasoning" output field to capture the model's step-by-step thinking.
 
 * **Input**: Function with input parameters and return type annotation.
 * **Output**: A dspy.Module instance capable of making predictions.
@@ -27,4 +27,4 @@ def generate_answer(self, context: list[str], question) -> str:
     pass
 
 generate_answer(context=context, question=question)
-```
+```
````
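The doc fix above describes building a typed predictor from a function's annotations and docstring. The introspection half of that idea can be sketched with the standard library; note this is a hypothetical stand-in (`cot_sketch`, the `spec` attribute, and the field names are illustrative), not DSPy's actual `@cot` implementation:

```python
import inspect
from typing import get_type_hints

def cot_sketch(func):
    """Illustrative @cot-style decorator: it reads the wrapped function's
    type annotations and docstring (the same inputs a TypedPredictor would
    be built from) and adds a 'reasoning' output field, as CoT does."""
    hints = get_type_hints(func)
    output_type = hints.pop("return", str)
    func.spec = {
        "instructions": inspect.getdoc(func) or "",
        "inputs": list(hints),               # input field names from annotations
        "outputs": ["reasoning", "answer"],  # CoT adds a reasoning field
        "output_type": output_type,
    }
    return func

@cot_sketch
def generate_answer(context: list[str], question: str) -> str:
    """Answer the question using the provided context."""

print(generate_answer.spec["inputs"])   # ['context', 'question']
print(generate_answer.spec["outputs"])  # ['reasoning', 'answer']
```

A real `TypedPredictor` would call a language model with these derived fields; here the derived spec is just attached to the function to show where the information comes from.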

docs/api/local_language_model_clients/Ollama.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -29,7 +29,7 @@ Here is the list of other models you can download:
 Run model: `ollama run`
 
-You can test a model by running the model with the `ollama run` command.
+You need to start the model server with the `ollama run` command.
 
 ```bash
 # run mistral
````

docs/docs/cheatsheet.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -280,7 +280,7 @@
 your_dspy_program_compiled = fewshot_optimizer.compile(student = your_dspy_program, trainset=trainset)
 ```
 
-#### Compiling a compiled program - bootstrapping a bootstraped program
+#### Compiling a compiled program - bootstrapping a bootstrapped program
 
 ```python
 your_dspy_program_compiledx2 = teleprompter.compile(
@@ -363,7 +363,7 @@ from dspy.teleprompt import COPRO
 eval_kwargs = dict(num_threads=16, display_progress=True, display_table=0)
 
-copro_teleprompter = COPRO(prompt_model=model_to_generate_prompts, task_model=model_that_solves_task, metric=your_defined_metric, breadth=num_new_prompts_generated, depth=times_to_generate_prompts, init_temperature=prompt_generation_temperature, verbose=False, log_dir=logging_directory)
+copro_teleprompter = COPRO(prompt_model=model_to_generate_prompts, metric=your_defined_metric, breadth=num_new_prompts_generated, depth=times_to_generate_prompts, init_temperature=prompt_generation_temperature, verbose=False)
 
 compiled_program_optimized_signature = copro_teleprompter.compile(your_dspy_program, trainset=trainset, eval_kwargs=eval_kwargs)
 ```
````

docs/docs/faqs.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -81,7 +81,7 @@ Exporting DSPy programs is simply saving them as highlighted above!
 - **How do I search my own data?**
 
-Open source libraries such as [RAGautouille](https://github.com/bclavie/ragatouille) enable you to search for your own data through advanced retrieval models like ColBERT with tools to embdeed and index documents. Feel free to integrate such libraries to create searchable datasets while developing your DSPy programs!
+Open source libraries such as [RAGautouille](https://github.com/bclavie/ragatouille) enable you to search for your own data through advanced retrieval models like ColBERT with tools to embed and index documents. Feel free to integrate such libraries to create searchable datasets while developing your DSPy programs!
 
 - **How do I turn off the cache? How do I export the cache?**
````

dsp/modules/cohere.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -63,7 +63,7 @@ def __init__(
         self.kwargs = {
             "model": model,
             "temperature": 0.0,
-            "max_tokens": 150,
+            "max_tokens": 2000,
             "p": 1,
             "num_generations": 1,
             **kwargs,
````
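Because `**kwargs` appears last in the dict literal above, callers can still override the raised `max_tokens` default: later keys win. A minimal sketch of that merge order (the function name is illustrative, not Cohere's or DSPy's API):

```python
def make_request_kwargs(model, **kwargs):
    # Defaults first, caller overrides last: in a dict literal,
    # a key supplied later replaces an earlier value for the same key.
    return {
        "model": model,
        "temperature": 0.0,
        "max_tokens": 2000,  # the commit raises this default from 150
        **kwargs,
    }

defaults = make_request_kwargs("command")
overridden = make_request_kwargs("command", max_tokens=300)
print(defaults["max_tokens"])    # 2000
print(overridden["max_tokens"])  # 300
```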

dsp/modules/lm.py

Lines changed: 9 additions & 14 deletions

````diff
@@ -46,13 +46,14 @@ def inspect_history(self, n: int = 1, skip: int = 0):
             prompt = x["prompt"]
 
             if prompt != last_prompt:
-                if (
-                    provider == "clarifai"
-                    or provider == "google"
-                    or provider == "groq"
-                    or provider == "Bedrock"
-                    or provider == "Sagemaker"
-                    or provider == "premai"
+                if provider in (
+                    "clarifai",
+                    "cloudflare",
+                    "google",
+                    "groq",
+                    "Bedrock",
+                    "Sagemaker",
+                    "premai",
                 ):
                     printed.append((prompt, x["response"]))
                 elif provider == "anthropic":
@@ -66,8 +67,6 @@ def inspect_history(self, n: int = 1, skip: int = 0):
                     printed.append((prompt, x["response"].text))
                 elif provider == "mistral":
                     printed.append((prompt, x["response"].choices))
-                elif provider == "cloudflare":
-                    printed.append((prompt, [x["response"]]))
                 elif provider == "ibm":
                     printed.append((prompt, x))
                 else:
@@ -87,12 +86,10 @@ def inspect_history(self, n: int = 1, skip: int = 0):
             printing_value += prompt
 
             text = ""
-            if provider == "cohere" or provider == "Bedrock" or provider == "Sagemaker":
+            if provider in ("cohere", "Bedrock", "Sagemaker", "clarifai", "claude", "ibm", "premai"):
                 text = choices
             elif provider == "openai" or provider == "ollama":
                 text = " " + self._get_choice_text(choices[0]).strip()
-            elif provider == "clarifai" or provider == "claude":
-                text = choices
             elif provider == "groq":
                 text = " " + choices
             elif provider == "google":
@@ -101,8 +98,6 @@ def inspect_history(self, n: int = 1, skip: int = 0):
                 text = choices[0].message.content
             elif provider == "cloudflare":
                 text = choices[0]
-            elif provider == "ibm" or provider == "premai":
-                text = choices
             else:
                 text = choices[0]["text"]
             printing_value += self.print_green(text, end="")
````

(Note: the added provider tuple is shown here with a comma after `"cloudflare"`; without it, Python would silently concatenate `"cloudflare"` and `"google"` into one string and neither provider would match.)
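The refactor above collapses chained `or` equality checks into a tuple membership test. The two forms are equivalent, but the tuple form has one pitfall worth knowing when editing it: adjacent string literals with a missing comma are silently concatenated, changing the tuple's contents without any error. A small illustration:

```python
# Chained equality vs. tuple membership: equivalent results, tidier code.
provider = "google"
chained = provider == "clarifai" or provider == "google" or provider == "groq"
membership = provider in ("clarifai", "google", "groq")
assert chained == membership

# Pitfall: omit a comma and Python concatenates the adjacent literals.
broken = ("clarifai", "cloudflare" "google", "groq")  # no comma after "cloudflare"
print(len(broken))                 # 3, not 4
print("cloudflaregoogle" in broken)  # True
print("google" in broken)            # False -- the check silently fails
```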

dspy/predict/multi_chain_comparison.py

Lines changed: 1 addition & 1 deletion

````diff
@@ -42,7 +42,7 @@ def forward(self, completions, **kwargs):
                 f"«I'm trying to {rationale} I'm not sure but my prediction is {answer}»",
             )
 
-        assert len(attempts) == self.M, len(attempts)
+        assert len(attempts) == self.M, f"The number of attempts ({len(attempts)}) doesn't match the expected number M ({self.M}). Please set the correct value for M when initializing MultiChainComparison."
 
         kwargs = {
             **{
````
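The change above replaces a bare assertion payload (which surfaces as an unhelpful `AssertionError: 2`) with a descriptive f-string naming both the actual and expected values. A minimal sketch of the pattern; `check_attempts` is an illustrative helper, not DSPy code:

```python
def check_attempts(attempts, expected_m):
    # Descriptive assertion message: states actual vs. expected, so the
    # traceback alone tells the user what to fix.
    assert len(attempts) == expected_m, (
        f"The number of attempts ({len(attempts)}) doesn't match the "
        f"expected number M ({expected_m})."
    )
    return attempts

check_attempts(["a", "b", "c"], 3)  # passes silently

try:
    check_attempts(["a", "b"], 3)
except AssertionError as e:
    print(e)  # message includes both 2 and 3
```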

dspy/teleprompt/bootstrap.py

Lines changed: 11 additions & 4 deletions

````diff
@@ -112,10 +112,17 @@ def _prepare_predictor_mappings(self):
         for (name1, predictor1), (name2, predictor2) in zip(student.named_predictors(), teacher.named_predictors()):
             assert name1 == name2, "Student and teacher must have the same program structure."
-            assert predictor1.signature.equals(
-                predictor2.signature,
-            ), (f"Student and teacher must have the same signatures. "
-                f"{type(predictor1.signature)} != {type(predictor2.signature)}"
+            if hasattr(predictor1.signature, "equals"):
+                assert predictor1.signature.equals(
+                    predictor2.signature,
+                ), (f"Student and teacher must have the same signatures. "
+                    f"{type(predictor1.signature)} != {type(predictor2.signature)}"
+                )
+            else:
+                # fallback in case if .equals is not implemented (e.g. dsp.Prompt)
+                assert predictor1.signature == predictor2.signature, (
+                    f"Student and teacher must have the same signatures. "
+                    f"{type(predictor1.signature)} != {type(predictor2.signature)}"
             )
             assert id(predictor1) != id(predictor2), "Student and teacher must be different objects."
````
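The `hasattr` guard above is a standard duck-typing pattern: use the richer `.equals` comparison when an object provides it, and fall back to plain `==` otherwise. A self-contained sketch of the idea (the two signature classes are illustrative stand-ins, not DSPy's types):

```python
class RichSig:
    """Signature type with a rich comparison method."""
    def __init__(self, fields):
        self.fields = fields

    def equals(self, other):
        # Rich comparison: field match regardless of order.
        return set(self.fields) == set(other.fields)

class PlainSig:
    """Signature type with only plain equality, like a simple prompt string."""
    def __init__(self, text):
        self.text = text

    def __eq__(self, other):
        return self.text == other.text

def signatures_match(a, b):
    # Prefer .equals when available, else fall back to ==, as in the diff above.
    if hasattr(a, "equals"):
        return a.equals(b)
    return a == b

assert signatures_match(RichSig(["question", "answer"]), RichSig(["answer", "question"]))
assert signatures_match(PlainSig("question -> answer"), PlainSig("question -> answer"))
```

The benefit is that one call site handles both signature representations without an `isinstance` check against every possible type.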

setup.py

Lines changed: 1 addition & 0 deletions

````diff
@@ -37,6 +37,7 @@
         "fastembed": ["fastembed"],
         "google-vertex-ai": ["google-cloud-aiplatform==1.43.0"],
         "myscale":["clickhouse-connect"],
+        "groq": ["groq~=0.8.0"],
     },
     classifiers=[
         "Development Status :: 3 - Alpha",
````
