Stream the chat responses #9329


Merged: 3 commits merged from stream-assistant-tokens into master on Jul 10, 2025

Conversation

estib-vega (Contributor)

Description

  • Implement backend streaming of AI chat responses using Rust and Tauri events.
  • Add a tool_calling_stream function to process and emit tokens incrementally (sketched below).
  • Create Rust traits and structures to manage streaming state and events.
  • Stream assistant tokens live to the frontend UI for dynamic chat updates.
  • Introduce FeedStreamMessage and propagate messageId for in-progress messages.
  • Update frontend components (Feed.ts, feed.svelte) to consume and render streamed tokens.
  • Bump dependencies and switch app host to localhost for development.
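
In rough terms, each generated token travels to the UI as a Tauri event. A minimal sketch of that emission path, assuming Tauri v2's `Emitter` API; the payload shape, helper name, and event name here are illustrative rather than the PR's exact definitions:

```rust
use serde::Serialize;
use tauri::{AppHandle, Emitter}; // Tauri v2; Tauri v1 exposes `emit_all` instead

// Illustrative payload shape; the PR's actual struct may differ.
#[derive(Clone, Serialize)]
struct ChatTokenPayload {
    message_id: String, // lets the UI append to the right in-progress message
    token: String,      // the incremental slice of assistant text
}

// Hypothetical helper mirroring the PR's token-emission idea.
fn emit_token_event(app: &AppHandle, message_id: &str, token: &str) -> tauri::Result<()> {
    app.emit(
        "chat-token", // hypothetical event name
        ChatTokenPayload {
            message_id: message_id.to_owned(),
            token: token.to_owned(),
        },
    )
}
```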


Implement backend logic enabling AI response streaming to the frontend. Add the `tool_calling_stream` function, which handles streaming OpenAI responses by processing incoming events and forwarding generated tokens incrementally via Tauri events. This involves:

- Creating Rust traits and structures to represent streaming state and events.
- Implementing an event emitter in the backend to send partial tokens as they are generated.
- Integrating the Tauri event system to communicate token-by-token progress with the frontend.

This sets up the infrastructure so that AI-generated tokens can be sent incrementally, paving the way for a more dynamic and responsive frontend UI.
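
One way to picture that incremental forwarding; the `TokenEmitter` trait and `forward_tokens` function are hypothetical names for illustration, not the PR's actual API:

```rust
use futures_util::{Stream, StreamExt};

// Hypothetical abstraction over "something that can receive a token", so the
// streaming loop stays testable without a real Tauri app handle.
trait TokenEmitter {
    fn emit_token(&self, message_id: &str, token: &str) -> anyhow::Result<()>;
}

// Drain an upstream stream of text deltas (e.g. from the OpenAI streaming API)
// and forward each piece as soon as it arrives, while also collecting the
// complete reply for persistence once the stream ends.
async fn forward_tokens<S, E>(
    mut tokens: S,
    emitter: &E,
    message_id: &str,
) -> anyhow::Result<String>
where
    S: Stream<Item = anyhow::Result<String>> + Unpin,
    E: TokenEmitter,
{
    let mut full_reply = String::new();
    while let Some(token) = tokens.next().await {
        let token = token?;
        emitter.emit_token(message_id, &token)?; // partial update for the UI
        full_reply.push_str(&token);
    }
    Ok(full_reply)
}
```

Abstracting the emitter behind a trait keeps the loop independent of Tauri, so the same forwarding logic can be exercised in tests with a mock emitter.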
This commit introduces streaming of assistant tokens into the UI by wiring token-emission events from the backend (Rust/Tauri) to frontend feed items. It adds FeedStreamMessage, propagates messageId, and supports in-progress assistant messages that stream content live as it is produced. It also bumps dependencies and switches the app host to localhost for development.

Highlights:
- Add TokenEvent and emit_token_event for chat streaming.
- Update Feed.ts, feed.svelte, and related components to consume and render streamed tokens.
- Update Rust actions and OpenAI wrappers to emit tokens incrementally.
- Add FeedStreamMessage and split message handling (see the state sketch below).
- Change .env for local development.
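
Streaming in-progress messages implies some bookkeeping keyed by messageId. A minimal sketch of such state, with assumed names and fields rather than the PR's exact definitions:

```rust
use std::collections::HashMap;

// Illustrative bookkeeping for in-progress assistant messages, keyed by the
// same messageId that is propagated to the frontend.
#[derive(Default)]
struct StreamState {
    in_progress: HashMap<String, String>, // message_id -> text accumulated so far
}

impl StreamState {
    // Append one streamed token and return the text accumulated so far.
    fn push_token(&mut self, message_id: &str, token: &str) -> &str {
        let entry = self.in_progress.entry(message_id.to_owned()).or_default();
        entry.push_str(token);
        entry
    }

    // Finish a message, yielding the full text for the final feed item.
    fn finish(&mut self, message_id: &str) -> Option<String> {
        self.in_progress.remove(message_id)
    }
}
```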
Fix the injection of the tool result for project status into the assistant chat-message flow. This refactor introduces ToolCall and ToolResponse structures, updates the ChatMessage enum to support the new variants, and corrects the logic so the project-status tool result is properly appended to the chat-message stream. It also updates conversion to the OpenAI API message format as needed for both tool calls and responses.
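
A sketch of the shapes this commit describes; the field names are assumptions for illustration, and the conversion to the OpenAI message format is omitted:

```rust
// Illustrative shapes for the tool-call plumbing; field names are
// assumptions, not the PR's exact definitions.
struct ToolCall {
    id: String,        // correlates the call with its response
    name: String,      // the tool being invoked, e.g. a project-status lookup
    arguments: String, // JSON-encoded arguments, as the OpenAI API expects
}

struct ToolResponse {
    call_id: String, // must match the originating ToolCall's id
    result: String,  // serialized tool output, appended back into the chat
}

// The enum gains variants so a tool invocation and its result can travel
// through the same message stream as ordinary chat turns.
enum ChatMessage {
    User(String),
    Assistant(String),
    ToolCall(ToolCall),
    ToolResponse(ToolResponse),
}
```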
@estib-vega force-pushed the stream-assistant-tokens branch from 9688bfb to 00ec783 on July 10, 2025 at 11:11
@estib-vega enabled auto-merge on July 10, 2025 at 11:13
@estib-vega merged commit 31943e2 into master on July 10, 2025; 20 checks passed
@estib-vega deleted the stream-assistant-tokens branch on July 10, 2025 at 11:17
Labels: @gitbutler/desktop, rust