
Commit 4159013

Add append() for appending without a response
Closes #92.
1 parent d93ee52 commit 4159013

File tree

1 file changed (+42 -3 lines changed)


README.md

Lines changed: 42 additions & 3 deletions
@@ -295,6 +295,35 @@ The `responseJSONSchema` option for `prompt()` and `promptStreaming()` can also

 While processing the JSON schema, if the user agent detects an unsupported schema, a `"NotSupportedError"` `DOMException` will be raised with an appropriate error message. The result value returned is a string that can be parsed with `JSON.parse()`. If the user agent is unable to produce a response that is compliant with the schema, a `"SyntaxError"` `DOMException` will be raised.

+### Appending messages without prompting for a response
+
+In some cases, you know which messages you'll want to use to populate the session, but not yet the final message before you prompt the model for a response. Because processing messages can take some time (especially for multimodal inputs), it's useful to be able to send such messages to the model ahead of time. This allows it to get a head start on processing while you wait for the right time to prompt for a response.
+
+(The `initialPrompts` array serves this purpose at session creation time, but appending can be useful after session creation as well, as we show in the example below.)
+
+For such cases, in addition to the `prompt()` and `promptStreaming()` methods, the prompt API provides an `append()` method, which takes the same message format as `prompt()`. Here's an example of how that could be useful:
+
+```js
+const session = await LanguageModel.create({
+  systemPrompt: "You are a skilled analyst who correlates patterns across multiple images.",
+  expectedInputs: [{ type: "image" }]
+});
+
+fileUpload.onchange = async (e) => {
+  await session.append([{
+    role: "user",
+    content: [
+      { type: "text", content: `Here's one image. Notes: ${fileNotesInput.value}` },
+      { type: "image", content: fileUpload.files[0] }
+    ]
+  }]);
+};
+
+analyzeButton.onclick = async (e) => {
+  analysisResult.textContent = await session.prompt(userQuestionInput.value);
+};
+```
+
### Configuration of per-session parameters

In addition to the `systemPrompt` and `initialPrompts` options shown above, the currently-configurable model parameters are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). The `params()` API gives the default and maximum values for these parameters.
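
For orientation (an editor's sketch, not part of this commit), the `params()` call described above could feed into session creation roughly like this; the property names `defaultTemperature`, `maxTemperature`, `defaultTopK`, and `maxTopK` are assumptions, since this diff does not show the shape of the returned object:

```js
// Sketch only: the exact shape returned by params() is assumed, not shown in this diff.
const { defaultTemperature, maxTemperature, defaultTopK, maxTopK } =
  await LanguageModel.params();

const session = await LanguageModel.create({
  // Nudge the temperature up while staying within the advertised maximum.
  temperature: Math.min(defaultTemperature * 1.2, maxTemperature),
  topK: defaultTopK
});
```
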
@@ -400,7 +429,7 @@ The ability to manually destroy a session allows applications to free up memory

### Aborting a specific prompt

-Specific calls to `prompt()` or `promptStreaming()` can be aborted by passing an `AbortSignal` to them:
+Specific calls to `prompt()`, `promptStreaming()`, or `append()` can be aborted by passing an `AbortSignal` to them:

```js
const controller = new AbortController();
@@ -412,8 +441,10 @@ const result = await session.prompt("Write me a poem", { signal: controller.sign
Note that because sessions are stateful, and prompts can be queued, aborting a specific prompt is slightly complicated:

* If the prompt is still queued behind other prompts in the session, then it will be removed from the queue.
-* If the prompt is being currently processed by the model, then it will be aborted, and the prompt/response pair will be removed from the conversation history.
-* If the prompt has already been fully processed by the model, then attempting to abort the prompt will do nothing.
+* If the prompt is currently being responded to by the model, then it will be aborted, and the prompt/response pair will be removed from the conversation history.
+* If the prompt has already been fully responded to by the model, then attempting to abort the prompt will do nothing.
+
+Since `append()`ed prompts are not responded to immediately, they can be aborted up until a subsequent call to `prompt()` or `promptStreaming()` has finished producing its response.
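
To illustrate the paragraph above, here is a hedged sketch (not part of the commit) of aborting an appended prompt before a response has used it; `session` comes from the earlier example, `cancelButton` is a hypothetical element, and the rejection-on-abort behavior is an assumption:

```js
const controller = new AbortController();

// Queue a message without asking for a response yet. The { signal } option
// corresponds to the LanguageModelAppendOptions dictionary added later in this diff.
const appended = session.append(
  [{ role: "user", content: "Background notes to use in a later analysis." }],
  { signal: controller.signal }
);

// Assumed behavior: aborting rejects the append() promise and drops the message,
// as long as no subsequent prompt()/promptStreaming() response has already used it.
cancelButton.onclick = () => controller.abort();

appended.catch(() => { /* aborted (or failed); the message is not part of the session */ });
```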

### Tokenization, context window length limits, and overflow

@@ -585,6 +616,10 @@ interface LanguageModel : EventTarget {
    LanguageModelPrompt input,
    optional LanguageModelPromptOptions options = {}
  );
+  Promise<undefined> append(
+    LanguageModelPrompt input,
+    optional LanguageModelAppendOptions options = {}
+  );

  Promise<double> measureInputUsage(
    LanguageModelPrompt input,
@@ -636,6 +671,10 @@ dictionary LanguageModelPromptOptions {
  AbortSignal signal;
};

+dictionary LanguageModelAppendOptions {
+  AbortSignal signal;
+};
+
dictionary LanguageModelCloneOptions {
  AbortSignal signal;
};
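
Tying the IDL back to JavaScript, a brief hedged sketch of the new surface; the resolved value follows from `Promise<undefined>` above, and the other names reuse earlier examples:

```js
// append() takes the same LanguageModelPrompt input as prompt(), plus an
// optional LanguageModelAppendOptions dictionary whose only member is signal.
const controller = new AbortController();

const result = await session.append(
  [{ role: "user", content: "One more note for the session." }],
  { signal: controller.signal }
);

console.assert(result === undefined); // append() produces no response text
```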
