Name and Version
$ ./llama-cli --version
version: 5124 (bc091a4)
built with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
Which llama.cpp modules do you know to be affected?
llama-server
Command line
llama-server -m model.gguf
Problem description & steps to reproduce
This bug has been filed before. See #12362 for example.
Observed behavior:
Open the browser (Firefox in this case), and enter a prompt in the llama-server web UI.
Keep sending prompts until there's at least a page full of text.
Then enter a new prompt.
The llama-server web UI does not scroll, so our prompt remains out of view. Perhaps we just need to scroll manually once, so let's do that.
We scroll to the bottom while the AI is generating its output.
While waiting for the full answer to be generated, the page keeps scrolling just fine, and the answer stays visible.
When the AI has completed its answer, the llama-server web UI suddenly jumps back to the original position we had before we entered our prompt.
Even if we close the browser tab and/or reload the page, the same issue happens.
Bugs:
the web UI doesn't always scroll while the answer is being generated (still present in build 5358), and
the web UI doesn't always scroll to the bottom when the answer is complete. This last issue seems to be fixed in llama-server build 5358.
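For context, both symptoms match a broken "stick to bottom" pattern. The sketch below is hypothetical and not taken from llama-server's actual web UI code; the function names (`shouldAutoScroll`, `attachAutoScroll`) and the pixel threshold are assumptions chosen for illustration. The expected behavior is: keep following the stream only while the user is already at (or near) the bottom, and never save/restore a stale scroll position after generation finishes.

```javascript
// Pure helper: decide whether the view should follow newly appended
// output. `threshold` is the slack in pixels that still counts as
// "scrolled to the bottom". (Hypothetical names, not llama-server code.)
function shouldAutoScroll(scrollTop, clientHeight, scrollHeight, threshold = 40) {
  return scrollHeight - (scrollTop + clientHeight) <= threshold;
}

// Browser wiring sketch: re-check on every DOM mutation (i.e. every
// streamed token appended to the chat container).
function attachAutoScroll(container) {
  const observer = new MutationObserver(() => {
    if (shouldAutoScroll(container.scrollTop, container.clientHeight, container.scrollHeight)) {
      container.scrollTop = container.scrollHeight; // follow the stream
    }
    // Crucially, nothing here restores an earlier scrollTop when the
    // answer completes; doing so would reproduce the jump-back bug.
  });
  observer.observe(container, { childList: true, subtree: true });
  return observer;
}
```

With this pattern, a user who scrolls up to reread earlier output is left alone (the threshold test fails), while a user sitting at the bottom keeps seeing the latest tokens, during and after generation.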