-
Is there an option to run the executable with a saved prompt without printing anything but the selected model's prediction? Some kind of silent mode, as opposed to `--verbose`.
-
If you pipe stderr to /dev/null, the log output goes away and only the generated text is left on stdout.
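For example, a minimal sketch assuming the standard `main` binary (the model and prompt paths are placeholders):

```sh
# Logs go to stderr; discarding it leaves only the generated text on stdout.
./main -m models/7B/ggml-model.gguf -f prompt.txt 2>/dev/null
```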
-
If you don't need the output displayed in real time as it is generated, you could use something like the following.
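A sketch, assuming the standard `main` binary (the model path and prompt are placeholders):

```sh
# Capture the entire generation (discarding logs on stderr), then print it once.
output=$(./main -m models/7B/ggml-model.gguf -p "Once upon a time" 2>/dev/null)
echo "$output"
```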
-
This really sounds like a useful flag for llama.cpp to have, either as `--silent` or as some kind of `--verbosity` setting.
-
A proper logging system, with a configuration file for log levels, would be the right solution here.
-
llama.cpp is under very active development. I agree the log printing should be improved so it can be turned off by either method: a parameter, or redirection to /dev/null.
-
There are currently several log levels set up in the GGML library. Llama hooks into this via the `llama_log_set()` callback.
-
hey @OsaCode - I accomplish this with two flags: `--log-disable` and `--no-display-prompt`.
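For example (the model and prompt paths are placeholders):

```sh
# --log-disable suppresses log output and --no-display-prompt hides the echoed
# prompt, so only the model's prediction is printed.
./main -m models/7B/ggml-model.gguf -f prompt.txt --log-disable --no-display-prompt
```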