[Bugfix] Add missing enable_log_outputs parameter to init_app_state function #23634
Conversation
Signed-off-by: Matúš Námešný <[email protected]>
Code Review

This pull request correctly fixes a bug where the --enable-log-outputs command-line argument was not being passed to the OpenAIServingChat handler, re-enabling output logging for chat completions. The change is correct and minimal. However, the fix appears to be incomplete: other generation handlers, such as OpenAIServingCompletion, do not receive this flag, which leads to inconsistent behavior. I've added a comment suggesting that this fix be extended to the other relevant handlers for feature completeness.
reasoning_parser=args.reasoning_parser,
enable_prompt_tokens_details=args.enable_prompt_tokens_details,
enable_force_include_usage=args.enable_force_include_usage,
enable_log_outputs=args.enable_log_outputs,
This change correctly passes the enable_log_outputs flag to OpenAIServingChat. However, other handlers that produce generations, such as OpenAIServingCompletion (instantiated a few lines below), do not receive this flag. The --enable-log-outputs flag's description suggests it should apply to all generations. To ensure consistent behavior and feature completeness, OpenAIServingCompletion and the other relevant handlers should also be updated to accept this flag and implement output logging.
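What the reviewer is suggesting could look roughly like the sketch below: every handler that produces generations accepts the same flag and logs its outputs when it is set. The classes here are hypothetical stand-ins, not vLLM's actual OpenAIServingChat/OpenAIServingCompletion, whose constructors take many more arguments (and OpenAIServingCompletion did not accept this flag at the time of review).

```python
# Sketch of the suggestion only: class names and signatures are simplified
# stand-ins, not vLLM's real serving classes.
import logging

logger = logging.getLogger("vllm.serving")


class _ServingBase:
    def __init__(self, enable_log_outputs: bool = False):
        self.enable_log_outputs = enable_log_outputs

    def _maybe_log_output(self, request_id: str, text: str) -> None:
        # Only log generated text when the operator opted in via the CLI flag.
        if self.enable_log_outputs:
            logger.info("Output for %s: %r", request_id, text)


class ChatHandler(_ServingBase):
    pass


class CompletionHandler(_ServingBase):
    pass


def init_handlers(args):
    # Forward the flag to every handler that produces generations so that
    # /v1/chat/completions and /v1/completions behave consistently.
    return (
        ChatHandler(enable_log_outputs=args.enable_log_outputs),
        CompletionHandler(enable_log_outputs=args.enable_log_outputs),
    )
```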
reasoning_parser=args.reasoning_parser,
enable_prompt_tokens_details=args.enable_prompt_tokens_details,
enable_force_include_usage=args.enable_force_include_usage,
enable_log_outputs=args.enable_log_outputs,
Can you also pass this to OpenAIServingResponses?
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only a small and essential subset of CI tests runs to quickly catch errors. You can ask your reviewers to trigger select CI tests on top of that. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add a ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Signed-off-by: Matúš Námešný <[email protected]>
Thanks for fixing!
…unction (vllm-project#23634) Signed-off-by: Matúš Námešný <[email protected]> Signed-off-by: tc-mb <[email protected]>
…unction (vllm-project#23634) Signed-off-by: Matúš Námešný <[email protected]>
…unction (vllm-project#23634) Signed-off-by: Matúš Námešný <[email protected]> Signed-off-by: Xiao Yu <[email protected]>
The --enable-log-outputs feature implemented in PR #20707 was missing the connection between the CLI argument and the OpenAIServingChat constructor when using vllm.entrypoints.openai.api_server as the entrypoint. This PR adds the missing enable_log_outputs=args.enable_log_outputs parameter to make the feature functional.
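The failure mode is straightforward: the flag was parsed but never forwarded, so the handler's default took effect. A toy reproduction of that pattern (hypothetical ChatHandler, not vLLM's actual class) might look like this:

```python
# Toy reproduction of the bug pattern; ChatHandler is a hypothetical stand-in,
# not vLLM's OpenAIServingChat.
import argparse


class ChatHandler:
    def __init__(self, enable_log_outputs: bool = False):
        self.enable_log_outputs = enable_log_outputs


parser = argparse.ArgumentParser()
parser.add_argument("--enable-log-outputs", action="store_true")
args = parser.parse_args(["--enable-log-outputs"])

before_fix = ChatHandler()  # flag parsed but never forwarded -> default False wins
after_fix = ChatHandler(enable_log_outputs=args.enable_log_outputs)

print(before_fix.enable_log_outputs, after_fix.enable_log_outputs)  # False True
```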
Purpose
Fix broken --enable-log-outputs functionality
Test Plan
Run vllm serve with the --enable-log-outputs flag
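One way to exercise this end to end (a sketch; the model name, port, and API key are placeholders, and any OpenAI-compatible client works) is to send a single chat completion against the running server and then check the server log for the generated text:

```python
# Assumes the server was started with something like:
#   vllm serve <your-model> --enable-log-outputs
# base_url and model are placeholders for whatever was actually served.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="<your-model>",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
# With --enable-log-outputs set, the generated text should also appear in the
# vllm serve log output.
```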
Test Result
Outputs are logged:
Essential Elements of an Effective PR Description Checklist