Because of their special behavior of being preserved on context window overflow, system prompts cannot be provided this way.
### Tool use
The Prompt API supports **tool use** via the `tools` option, allowing you to define external capabilities that a language model can invoke in a model-agnostic way. Each tool is represented by an object that includes an `execute` member that specifies the JavaScript function to be called. When the language model initiates a tool use request, the user agent calls the corresponding `execute` function and sends the result back to the model.
Here’s an example of how to use the `tools` option:
```js
const session = await LanguageModel.create({
  initialPrompts: [
    {
      role: "system",
      content: `You are a helpful assistant. You can use tools to help the user.`
    }
  ],
  tools: [
    {
      name: "getWeather",
      description: "Get the weather in a location.",
      inputSchema: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city to check for the weather condition."
          }
        },
        required: ["location"]
      },
      // Illustrative implementation: a real tool would call an actual
      // weather service and return its response as a string.
      async execute({ location }) {
        const response = await fetch(`https://weather.example/?location=${encodeURIComponent(location)}`);
        return JSON.stringify(await response.json());
      }
    }
  ]
});

const result = await session.prompt("What is the weather in Seattle?");
```
In this example, the `tools` array defines a `getWeather` tool, specifying its name, description, input schema, and `execute` implementation. When the language model determines that a tool call is needed, the user agent invokes the `getWeather` tool's `execute()` function with the provided arguments and returns the result to the model, which can then incorporate it into its response.
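The dispatch step described above can be sketched in plain JavaScript. This is a conceptual illustration only, not part of the API surface: the `dispatchToolCall` helper and the stubbed `execute` body are hypothetical stand-ins for the user agent's internal handling of a model-initiated tool call.

```js
// Conceptual sketch: how a user agent might route a model-initiated tool call
// to the matching tool's `execute` function. `dispatchToolCall` is hypothetical;
// the real dispatch happens inside the browser.
const tools = [
  {
    name: "getWeather",
    description: "Get the weather in a location.",
    // Stubbed implementation standing in for a real weather lookup.
    async execute({ location }) {
      return JSON.stringify({ location, condition: "sunny" });
    }
  }
];

async function dispatchToolCall(tools, { name, arguments: args }) {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // The string returned here is what gets sent back to the model.
  return tool.execute(args);
}

dispatchToolCall(tools, { name: "getWeather", arguments: { location: "Seattle" } })
  .then((result) => console.log(result)); // {"location":"Seattle","condition":"sunny"}
```

The key design point this illustrates is model-agnosticism: the model only names a tool and supplies arguments matching its `inputSchema`; all JavaScript execution stays on the page's side of the boundary.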