While processing the JSON schema, if the user agent detects an unsupported schema feature, it will raise a `"NotSupportedError"` `DOMException` with an appropriate error message. The result value returned is a string, which can be parsed with `JSON.parse()`. If the user agent is unable to produce a response that is compliant with the schema, a `"SyntaxError"` `DOMException` will be raised.
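As a sketch of how a caller might handle these cases (the schema, the `ratedReview` helper, and the prompt text below are illustrative, not part of the API):

```javascript
// Sketch: requesting schema-constrained output and handling the two errors
// described above. `session` is a Prompt API session; the schema and the
// ratedReview helper are hypothetical examples.
const ratingSchema = {
  type: "object",
  properties: { rating: { type: "number", minimum: 0, maximum: 5 } },
  required: ["rating"]
};

async function ratedReview(session, reviewText) {
  try {
    const raw = await session.prompt(
      `Rate this review from 0 to 5: ${reviewText}`,
      { responseJSONSchema: ratingSchema }
    );
    return JSON.parse(raw); // the result is a string; parse it yourself
  } catch (e) {
    if (e.name === "NotSupportedError") {
      throw new Error("The schema uses a feature the user agent does not support");
    }
    if (e.name === "SyntaxError") {
      throw new Error("The model could not produce schema-compliant output");
    }
    throw e;
  }
}
```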
### Appending messages without prompting for a response
In some cases, you know which messages you'll want to use to populate the session, but not yet the final message before you prompt the model for a response. Because processing messages can take some time (especially for multimodal inputs), it's useful to be able to send such messages to the model ahead of time. This allows it to get a head-start on processing, while you wait for the right time to prompt for a response.
(The `initialPrompts` array serves this purpose at session creation time, but this can be useful after session creation as well, as we show in the example below.)
For such cases, in addition to the `prompt()` and `promptStreaming()` methods, the prompt API provides an `append()` method, which takes the same message format as `prompt()`. Here's an example of how that could be useful:
```js
const session = await LanguageModel.create({
  systemPrompt: "You are a skilled analyst who correlates patterns across multiple images.",
  expectedInputs: [{ type: "image" }]
});

fileUpload.onchange = async (e) => {
  await session.append([{
    role: "user",
    content: [
      { type: "text", content: `Here's one image. Notes: ${fileNotesInput.value}` },
      { type: "image", content: fileUpload.files[0] }
    ]
  }]);
};
```
In addition to the `systemPrompt` and `initialPrompts` options shown above, the currently-configurable model parameters are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). The `params()` API gives the default and maximum values for these parameters.
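For instance, a caller can clamp its requested values against those limits before creating a session. This sketch assumes `params()` resolves to an object with `defaultTemperature`/`maxTemperature`/`defaultTopK`/`maxTopK` fields; the `clampSamplingOptions` helper is a hypothetical convenience, not part of the API:

```javascript
// Sketch: clamp requested sampling options to the limits reported by
// LanguageModel.params(). The field names are assumptions based on the
// surrounding text; the helper itself is illustrative.
function clampSamplingOptions(params, { temperature, topK } = {}) {
  return {
    temperature: Math.min(temperature ?? params.defaultTemperature, params.maxTemperature),
    topK: Math.min(topK ?? params.defaultTopK, params.maxTopK)
  };
}

// In the browser this might be used as:
//   const session = await LanguageModel.create(
//     clampSamplingOptions(await LanguageModel.params(), { temperature: 2.5, topK: 16 }));
```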
### Aborting a specific prompt
Specific calls to `prompt()`, `promptStreaming()`, or `append()` can be aborted by passing an `AbortSignal` to them:
```js
const controller = new AbortController();
stopButton.onclick = () => controller.abort();

const result = await session.prompt("Write me a poem", { signal: controller.signal });
```
Note that because sessions are stateful, and prompts can be queued, aborting a specific prompt is slightly complicated:
* If the prompt is still queued behind other prompts in the session, then it will be removed from the queue.
* If the prompt is currently being responded to by the model, then it will be aborted, and the prompt/response pair will be removed from the conversation history.
* If the prompt has already been fully responded to by the model, then attempting to abort the prompt will do nothing.
Since `append()`ed prompts are not responded to immediately, they can be aborted at any point up until a subsequent call to `prompt()` or `promptStreaming()` causes them to be responded to, and that response has finished.
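One way to take advantage of this window is to keep an `AbortController` per appended message. The `appendCancellable` wrapper below is a hypothetical sketch, assuming `append()` accepts `{ signal }` options as described above:

```javascript
// Sketch: wrap append() so the caller can cancel a message that has not yet
// been responded to. `session` is assumed to follow the Prompt API shape;
// appendCancellable is an illustrative helper, not part of the API.
function appendCancellable(session, message) {
  const controller = new AbortController();
  const done = session
    .append([message], { signal: controller.signal })
    .catch((e) => {
      // Cancellation is an expected outcome; rethrow anything else.
      if (e.name !== "AbortError") throw e;
    });
  return { done, cancel: () => controller.abort() };
}
```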
### Tokenization, context window length limits, and overflow