
Commit 6640cce

Merge pull request stanfordnlp#1143 from Anindyadeep/prem/templates
Improvement: Prem Templates
2 parents 0cf0b21 + 591ce61 commit 6640cce

2 files changed (+96, -46 lines)

docs/api/language_model_clients/PremAI.md

Lines changed: 51 additions & 25 deletions
@@ -4,7 +4,7 @@ sidebar_position: 5

# dspy.PremAI

[PremAI](https://app.premai.io) is an all-in-one platform that simplifies the process of creating robust, production-ready applications powered by Generative AI. By streamlining the development process, PremAI allows you to concentrate on enhancing user experience and driving overall growth for your application.

### Prerequisites

@@ -66,11 +66,53 @@ Internally, the method handles the specifics of preparing the request prompt and
- `prompt` (_str_): Prompt to send to PremAI.
- `**kwargs`: Additional keyword arguments for the completion request, e.g. parameters like `temperature`, `max_tokens`, etc., as sketched below. You can find all the additional kwargs [here](https://docs.premai.io/get-started/sdk#optional-parameters).

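For example, these optional parameters ride along as ordinary keyword arguments. A minimal sketch (the values are arbitrary and the project id is hypothetical):

```python
from dspy import PremAI

client = PremAI(project_id=1234)  # hypothetical project id
response = client("Tell me about galaxies", temperature=0.7, max_tokens=100)
print(response)
```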
### Prem Templates

Writing prompt templates can get messy. Prompt templates are long, hard to manage, and must be continuously tweaked to improve them while keeping them consistent across the application.

With **Prem**, writing and managing prompts is easy. The **_Templates_** tab inside the [launchpad](https://docs.premai.io/get-started/launchpad) helps you write as many prompts as you need and use them inside the SDK to make your application run on those prompts. You can read more about prompt templates [here](https://docs.premai.io/get-started/prem-templates).

Using templates in DSPy is easy. First, here is an example of a prompt template that you store and iterate on inside Prem:

```python
template = """
Summarize the following content by creating a bullet point list and return it in json
${content}
"""
```

**Please note:** prompt templates are not defined in Python code; they are created and stored on the Prem platform, and each template has an associated template id.

Assuming this prompt template is stored under a template with id `78069ce8-xxxxx-xxxxx-xxxx-xxx`:

```python
from dspy import PremAI

client = PremAI(project_id=1234)
template_id = "78069ce8-xxxxx-xxxxx-xxxx-xxx"

msg = """
I was playing in a garden on a fine Sunday
evening. I then went to my friend Tara's place
and then went to the movies in the city center mall.
"""

response = client(
    prompt="some-random-dummy-text",
    template_id=template_id,
    params={"content": msg}
)

print(response)
```

When you use templates, there is no need to place `msg` / prompt inside the `prompt` argument. DSPy only accepts string prompts to conduct LLM requests, but we might require multiple string inputs when using templates. Templates let you provide additional inputs (key: value pairs) within the `params` argument, and these are mapped to the template variables. Our example template had one variable, `content`, so our params become `{"content": msg}`.

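If a template has several variables, each `${...}` placeholder simply maps to a key in `params`. A minimal sketch (the template text and the template id below are hypothetical, not from this commit):

```python
from dspy import PremAI

# Hypothetical template stored on Prem:
#   Write a ${tone} summary of the following content:
#   ${content}
client = PremAI(project_id=1234)  # hypothetical project id

response = client(
    prompt="placeholder-text",  # ignored for generation when a template is used
    template_id="11111111-xxxx-xxxx-xxxx-111111111111",  # hypothetical template id
    params={"tone": "formal", "content": "DSPy programs compile prompts automatically."},
)
print(response)
```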
### Native RAG Support

PremAI Repositories allow users to upload documents (.txt, .pdf, etc.) and connect those repositories to the LLMs to serve as vector databases and support native RAG. You can learn more about PremAI repositories [here](https://docs.premai.io/get-started/repositories).

Repositories are also supported through the dspy-premai integration. Here is how you can use this workflow:

```python
query = "what is the diameter of individual Galaxy"
@@ -82,21 +124,21 @@ repositories = dict(
)
```

First, we start by defining our repository with some valid repository ids; a fuller sketch of this mapping is shown below. You can learn more about how to get the repository id [here](https://docs.premai.io/get-started/repositories).
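For reference, a complete `repositories` mapping would look something like this sketch; the field names (`ids`, `similarity_threshold`, `limit`) follow the PremAI SDK docs, and the id value is hypothetical:

```python
repo_id = 1991  # hypothetical repository id from the Prem platform

repositories = dict(
    ids=[repo_id],             # one or more repository ids to search over
    similarity_threshold=0.3,  # minimum similarity score for a retrieved chunk
    limit=3,                   # maximum number of chunks to retrieve
)
```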

> Note: This is similar to LM integrations: when you invoke the `repositories` argument, you are overriding the repositories connected in the launchpad.

Now, we connect the repository with our chat object to invoke RAG-based generations.

```python
response = llm(query, max_tokens=100, repositories=repositories)

print(response)
print("---")
print(json.dumps(llm.history, indent=4))
```

Here is what an example generation would look like with PremAI Repositories.

```bash
'The diameters of individual galaxies range from 80,000-150,000 light-years.'
@@ -113,23 +155,7 @@ Here is what an example generation would look like with PremAI Repositories.
                "document_name": "Kegy 202 Chapter 2",
                "similarity_score": 0.586126983165741,
                "content": "n thousands\n of light-years. The diameters of individual\n galaxies range from 80,000-150,000 light\n "
-            },
-            {
-                "repository_id": 1991,
-                "document_id": 1307,
-                "chunk_id": 173925,
-                "document_name": "Kegy 202 Chapter 2",
-                "similarity_score": 0.4815782308578491,
-                "content": " for development of galaxies. A galaxy contains\n a large number of stars. Galaxies spread over\n vast distances that are measured in thousands\n "
-            },
-            {
-                "repository_id": 1991,
-                "document_id": 1307,
-                "chunk_id": 173916,
-                "document_name": "Kegy 202 Chapter 2",
-                "similarity_score": 0.38112708926200867,
-                "content": " was separated from the from each other as the balloon expands.\n solar surface. As the passing star moved away, Similarly, the distance between the galaxies is\n the material separated from the solar surface\n continued to revolve around the sun and it\n slowly condensed into planets. Sir James Jeans\n and later Sir Harold Jeffrey supported thisnot to be republishedalso found to be increasing and thereby, the\n universe is"
-            }
+            }
        ],
        "kwargs": {
            "max_tokens": 100,
@@ -157,4 +183,4 @@ Here is what an example generation would look like with PremAI Repositories.

So this also means that you do not need to create your own RAG pipeline when using the PremAI Platform and can instead take advantage of its local RAG technology to deliver best-in-class performance for Retrieval Augmented Generations.

> Ideally, you do not need to connect repository ids here to get Retrieval Augmented Generations. You can still get the same result if you have connected the repositories in the PremAI platform.

dsp/modules/premai.py

Lines changed: 45 additions & 21 deletions
@@ -1,5 +1,6 @@
import os
-from typing import Any, Dict, Optional
+import warnings
+from typing import Any, Optional

import backoff

@@ -20,7 +21,7 @@ def backoff_hdlr(details) -> None:

    See more at: https://pypi.org/project/backoff/
    """
-    print(
+    print(  # noqa: T201
        "Backing off {wait:0.1f} seconds after {tries} tries calling function {target} with kwargs {kwargs}".format(
            **details,
        ),
@@ -52,7 +53,6 @@ def __init__(
        project_id: int,
        model: Optional[str] = None,
        api_key: Optional[str] = None,
-        **kwargs,
    ) -> None:
        """Parameters
        ----------
@@ -63,8 +63,6 @@ def __init__(
        api_key: Optional[str]
            Prem AI API key, to connect with the API. If not provided then it will check from env var by the name
            PREMAI_API_KEY
-        **kwargs: dict
-            Additional arguments to pass to the API provider
        """
        self.model = "default" if model is None else model
        super().__init__(self.model)
@@ -81,20 +79,20 @@ def __init__(
        self.history: list[dict[str, Any]] = []

    @property
-    def _default_params(self) -> Dict[str, Any]:
+    def _default_params(self) -> dict[str, Any]:
        default_kwargs = {
            "temperature": None,
            "max_tokens": None,
            "system_prompt": None,
            "repositories": None,
        }

        if self.model != "default":
            default_kwargs["model_name"] = self.model

        return default_kwargs

-    def _get_all_kwargs(self, **kwargs) -> Dict[str, Any]:
+    def _get_all_kwargs(self, **kwargs) -> dict[str, Any]:
        kwargs_to_ignore = [
            "top_p",
            "tools",
@@ -107,7 +105,7 @@ def _get_all_kwargs(self, **kwargs) -> Dict[str, Any]:
        keys_to_remove = []
        for key in kwargs:
            if key in kwargs_to_ignore:
-                print(f"WARNING: Parameter {key} is not supported in kwargs.")
+                warnings.warn(f"WARNING: Parameter {key} is not supported in kwargs.", stacklevel=2)
                keys_to_remove.append(key)

        for key in keys_to_remove:
@@ -122,9 +120,36 @@ def _get_all_kwargs(self, **kwargs) -> Dict[str, Any]:
    def basic_request(self, prompt, **kwargs) -> str:
        """Handles retrieval of completions from Prem AI whilst handling API errors."""
        all_kwargs = self._get_all_kwargs(**kwargs)
-        messages = []
-        messages.append({"role": "user", "content": prompt})

        if "template_id" not in all_kwargs:
            messages = [{"role": "user", "content": prompt}]
        else:
            template_id = all_kwargs["template_id"]
            if (template_id is None) or (template_id == ""):
                raise ValueError("Templates can not be None or ''")
            if "params" not in all_kwargs:
                raise KeyError(
                    "Keyword argument: params must be present if template_id is present",
                )

            params = all_kwargs["params"]
            if not isinstance(params, dict):
                raise TypeError("params must be a dictionary")

            messages = [
                {
                    "role": "user",
                    "template_id": template_id,
                    "params": params,
                },
            ]
            kwargs["template_id"] = all_kwargs.get("template_id", None)
            kwargs["params"] = all_kwargs.get("params", None)

            all_kwargs.pop("template_id")
            all_kwargs.pop("params")

        kwargs = {**kwargs, **all_kwargs}
        response = self.client.chat.completions.create(
            project_id=self.project_id,
            messages=messages,
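As a quick illustration of the contract this hunk introduces, a minimal sketch (the project id is hypothetical; error messages are quoted from the code above):

```python
from dspy import PremAI

client = PremAI(project_id=1234)  # hypothetical project id

# Plain path: no template_id, so the prompt is sent as a normal user message.
client("Summarize black holes in two lines.")

# Template path: template_id must be non-empty and params must be a dict
# whose keys match the template variables.
client(
    prompt="ignored-for-generation",
    template_id="78069ce8-xxxxx-xxxxx-xxxx-xxx",
    params={"content": "some content"},
)

# Guard rails enforced above:
#   template_id of None or ""   -> ValueError("Templates can not be None or ''")
#   template_id without params  -> KeyError
#   params that is not a dict   -> TypeError("params must be a dictionary")
```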
@@ -138,17 +163,16 @@ def basic_request(self, prompt, **kwargs) -> str:
            raise premai_api_error("ChatResponse is none")

        output_text = content or ""
-        document_chunks = None if response.document_chunks is None else [
-            chunk.to_dict() for chunk in response.document_chunks
-        ]
+        document_chunks = (
+            None if response.document_chunks is None else [chunk.to_dict() for chunk in response.document_chunks]
+        )

        self.history.append(
            {
                "prompt": prompt,
                "response": content,
                "document_chunks": document_chunks,
-                "kwargs": all_kwargs,
-                "raw_kwargs": kwargs,
+                "kwargs": kwargs,
            },
        )
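One observable effect of the history change above: when a template is used, the recorded kwargs now carry the template inputs rather than a separate `raw_kwargs` entry. A sketch (the project id and template id are hypothetical):

```python
from dspy import PremAI

client = PremAI(project_id=1234)  # hypothetical project id
client(prompt="dummy", template_id="78069ce8-xxxxx-xxxxx-xxxx-xxx", params={"content": "hello"})

entry = client.history[-1]
print(entry["kwargs"]["template_id"])  # the template id passed above
print(entry["kwargs"]["params"])       # {'content': 'hello'}
```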