Stream the chat responses #9329
Merged
Conversation
Implement backend logic enabling AI response streaming to the frontend.

Added the `tool_calling_stream` function, which handles streaming OpenAI responses by processing incoming events and forwarding generated tokens incrementally via Tauri events. This involved:

- Creating Rust traits and structures to represent streaming state and events.
- Implementing an event emitter in the backend to send partial tokens as they are generated.
- Integrating the Tauri event system to communicate token-by-token progress to the frontend.

This sets up the infrastructure so that AI-generated tokens can be sent incrementally, paving the way for a more dynamic and responsive frontend UI.
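The PR implements this inside GitButler's own OpenAI wrappers; as a rough sketch of the same shape, here is a `tool_calling_stream` built on the `async-openai` crate instead, with a hypothetical `emit_token` callback standing in for the Tauri emitter (sketched under the next commit):

```rust
// Sketch only: `async-openai` and the `emit_token` callback are stand-ins
// for the PR's in-house OpenAI wrappers and Tauri event emitter.
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;

async fn tool_calling_stream(
    prompt: &str,
    mut emit_token: impl FnMut(&str) -> anyhow::Result<()>,
) -> anyhow::Result<()> {
    let client = Client::new(); // reads OPENAI_API_KEY from the environment
    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o")
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content(prompt)
            .build()?
            .into()])
        .build()?;

    // `create_stream` yields chunks as the model produces them; each chunk
    // carries a delta holding zero or more freshly generated tokens.
    let mut stream = client.chat().create_stream(request).await?;
    while let Some(chunk) = stream.next().await {
        for choice in chunk?.choices {
            if let Some(token) = choice.delta.content {
                // Forward each token immediately instead of buffering the
                // whole completion.
                emit_token(&token)?;
            }
        }
    }
    Ok(())
}
```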
This commit introduces streaming of assistant tokens into the UI by wiring token-emission events from the backend (Rust/Tauri) to frontend feed items. It adds `FeedStreamMessage`, propagates `messageId`, and supports in-progress assistant messages that stream content live as it is produced. It also bumps dependencies and switches the app host to localhost for development. Highlights:

- Add `TokenEvent`/`emit_token_event` for chat streaming.
- `Feed.ts`, `feed.svelte`, and related code consume and render streamed tokens.
- Update Rust actions and OpenAI wrappers to emit tokens incrementally.
- Add `FeedStreamMessage` and split message handling accordingly.
- Change `.env` for local dev.
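A minimal sketch of what the `TokenEvent`/`emit_token_event` pair could look like, assuming Tauri 2's `Emitter` trait (on Tauri 1 this would be `Manager::emit_all`); the `chat_token` event name and the payload fields are assumptions, chosen so the frontend can route each token to the right in-progress feed item by `messageId`:

```rust
// Assumed payload shape; only the names TokenEvent and emit_token_event
// come from the commit itself.
use serde::Serialize;
use tauri::{AppHandle, Emitter};

#[derive(Clone, Serialize)]
#[serde(rename_all = "camelCase")] // serialize message_id as messageId
struct TokenEvent {
    /// Identifies the in-progress assistant message so the frontend can
    /// append the token to the matching feed item.
    message_id: String,
    token: String,
}

fn emit_token_event(handle: &AppHandle, message_id: &str, token: &str) -> tauri::Result<()> {
    // Broadcast the partial token to listening webviews; the Svelte side
    // would subscribe to the same event name with `listen(...)`.
    handle.emit(
        "chat_token",
        TokenEvent {
            message_id: message_id.to_owned(),
            token: token.to_owned(),
        },
    )
}
```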
Fix the injection of the tool result for project status into the assistant chat-message flow. This refactor introduces `ToolCall` and `ToolResponse` structures, updates the `ChatMessage` enum with new variants, and corrects the logic so that the project-status tool result is properly appended to the chat message stream. It also updates the conversion to the OpenAI API message format as needed for both tool calls and responses.
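Hypothetical shapes for the structures this commit describes; the field names are guesses, while the JSON conversion follows the OpenAI chat API's tool-calling wire format, where a tool result must be sent back as a `tool`-role message whose `tool_call_id` matches the originating call:

```rust
// Sketch under assumed field names; only ToolCall, ToolResponse, and
// ChatMessage are named in the commit.
use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Clone, Serialize, Deserialize)]
struct ToolCall {
    id: String,        // correlates the call with its eventual response
    name: String,      // e.g. the project-status tool
    arguments: String, // JSON-encoded arguments, as the OpenAI API sends them
}

#[derive(Clone, Serialize, Deserialize)]
struct ToolResponse {
    id: String, // must equal the id of the ToolCall it answers
    result: String,
}

#[derive(Clone, Serialize, Deserialize)]
enum ChatMessage {
    User(String),
    Assistant(String),
    ToolCall(ToolCall),
    ToolResponse(ToolResponse),
}

/// Convert one history entry into the OpenAI chat-completions message format.
fn to_openai_message(msg: &ChatMessage) -> serde_json::Value {
    match msg {
        ChatMessage::User(text) => json!({ "role": "user", "content": text }),
        ChatMessage::Assistant(text) => json!({ "role": "assistant", "content": text }),
        // An assistant turn that requests a tool invocation.
        ChatMessage::ToolCall(call) => json!({
            "role": "assistant",
            "tool_calls": [{
                "id": call.id,
                "type": "function",
                "function": { "name": call.name, "arguments": call.arguments }
            }]
        }),
        // The tool's result; tool_call_id ties it back to the request above.
        ChatMessage::ToolResponse(resp) => json!({
            "role": "tool",
            "tool_call_id": resp.id,
            "content": resp.result
        }),
    }
}
```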
Description
- Add the `tool_calling_stream` function to process and emit tokens incrementally.
- Add `FeedStreamMessage` and propagate `messageId` for in-progress messages.
- Update the frontend (`Feed.ts`, `feed.svelte`) to consume and render streamed tokens.