Conversation

@rmccorm4 (Contributor) commented Oct 9, 2023

Note that this shouldn't break L0_http/generate_endpoint_test.py, because that test uses a separate mock of the vLLM model. We can update the test's mock model to match separately.

Tested locally; this should also be covered by the upcoming vLLM CI test.

@rmccorm4 rmccorm4 merged commit 2e4a4e7 into main Oct 9, 2023
@rmccorm4 rmccorm4 deleted the rmccormick-stream branch October 9, 2023 21:53
fdf3d186-88d5 pushed a commit to fdf3d186-88d5/triton-inference-server that referenced this pull request Mar 21, 2025