Commit 4b52b59

Remove inspect_history from COPRO (stanfordnlp#8033)

* replace print in `_inspect_history` with logging
* fix calls within COPRO
* fix tests
* typo
* remove `inspect_history` from COPRO

1 parent 7a877d1 commit 4b52b59

File tree

2 files changed: +15 -6 lines changed

docs/docs/faqs.md

Lines changed: 15 additions & 0 deletions

````diff
@@ -115,3 +115,18 @@ Modules can be frozen by setting their `._compiled` attribute to be True, indica
 If you're dealing with "context too long" errors in DSPy, you're likely using DSPy optimizers to include demonstrations within your prompt, and this is exceeding your current context window. Try reducing these parameters (e.g. `max_bootstrapped_demos` and `max_labeled_demos`). Additionally, you can also reduce the number of retrieved passages/docs/embeddings to ensure your prompt is fitting within your model context length.
 
 A more general fix is simply increasing the number of `max_tokens` specified to the LM request (e.g. `lm = dspy.OpenAI(model = ..., max_tokens = ...)`).
+
+## Set Verbose Level
+DSPy utilizes the [logging library](https://docs.python.org/3/library/logging.html) to print logs. If you want to debug your DSPy code, set the logging level to DEBUG with the example code below.
+
+```python
+import logging
+logging.getLogger("dspy").setLevel(logging.DEBUG)
+```
+
+Alternatively, if you want to reduce the amount of logs, set the logging level to WARNING or ERROR.
+
+```python
+import logging
+logging.getLogger("dspy").setLevel(logging.WARNING)
+```
````
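The docs addition above relies on standard `logging` level semantics. As a minimal sketch of why it works (the `"dspy"` logger name is taken from the docs snippet; no DSPy import is needed to demonstrate the mechanism): setting a named logger to DEBUG enables debug-level records, while raising it to WARNING suppresses them.

```python
import logging

# Named logger, as in the docs snippet above.
logger = logging.getLogger("dspy")

# At DEBUG, debug-level records from this logger are enabled.
logger.setLevel(logging.DEBUG)
assert logger.isEnabledFor(logging.DEBUG)

# At WARNING, debug records are suppressed but warnings still pass.
logger.setLevel(logging.WARNING)
assert not logger.isEnabledFor(logging.DEBUG)
assert logger.isEnabledFor(logging.WARNING)
```

The same call works for any level constant (`logging.ERROR`, `logging.INFO`, etc.), since `setLevel` simply sets the threshold below which records are dropped.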

dspy/teleprompt/copro_optimizer.py

Lines changed: 0 additions & 6 deletions

```diff
@@ -319,19 +319,13 @@ def compile(self, student, *, trainset, eval_kwargs):
                     temperature=self.init_temperature,
                 )(attempted_instructions=attempts)
 
-                if self.prompt_model:
-                    logger.debug(
-                        f"(self.prompt_model.inspect_history(n=1)) {self.prompt_model.inspect_history(n=1)}"
-                    )
                 # Get candidates for each predictor
                 new_candidates[id(p_base)] = instr.completions
                 all_candidates[id(p_base)].proposed_instruction.extend(instr.completions.proposed_instruction)
                 all_candidates[id(p_base)].proposed_prefix_for_output_field.extend(
                     instr.completions.proposed_prefix_for_output_field,
                 )
 
-                if self.prompt_model:
-                    logger.debug(f"{self.prompt_model.inspect_history(n=1)}")
                 latest_candidates = new_candidates
 
                 candidates = []
```
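The removed `logger.debug(...)` calls illustrate the pattern this commit relies on: debug output routed through a named logger is emitted only when that logger's level allows it, so callers control verbosity instead of the optimizer printing unconditionally. A minimal sketch of that gating (the logger name `"copro_example"` and the captured message are hypothetical, purely for illustration):

```python
import logging

logger = logging.getLogger("copro_example")  # hypothetical name for illustration

# Capture emitted messages in a list so the gating is observable.
records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger.addHandler(handler)

logger.setLevel(logging.INFO)
logger.debug("prompt model history")  # suppressed: INFO > DEBUG

logger.setLevel(logging.DEBUG)
logger.debug("prompt model history")  # emitted: DEBUG level enabled

assert records == ["prompt model history"]
```

This is why replacing `print` with `logger.debug` (and the docs' `logging.getLogger("dspy").setLevel(...)` knob) lets users opt in to verbose optimizer output rather than always seeing it.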
