llama-server development #14546
Unanswered
brunette69-ruby asked this question in Q&A

How well is llama-server maintained and developed? I ask this because I am having issues with embeddings through the server. Is it better to go pure C++ or use the Python bindings rather than relying on the llama HTTP server and its API?

Replies: 1 comment

The server is well maintained and developed; take a look at the commit history and you'll see it is updated very frequently!
As for which approach is better, that is impossible for anyone to answer, as it depends entirely on the intended goal. I recommend coming up with some higher-level objectives, and then a general architecture that fits those needs. If you are embedding llama.cpp within an application, perhaps C++ or the Python bindings. If you want to cleanly separate the application from the AI behind an API, perhaps the server with the OpenAI library. 🙂
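For the server route, llama-server exposes an OpenAI-compatible HTTP API, so the standard `openai` Python client can talk to it directly. Below is a minimal sketch, assuming a local llama-server already running with an embeddings-capable model loaded; the host, port, and model name are placeholders, not values taken from this thread:

```python
# Minimal sketch: using the OpenAI Python client against a local llama-server.
# Assumes llama-server is running on localhost:8080 with an embeddings-capable
# model loaded; "local-model" and the port are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llama-server's OpenAI-compatible endpoint
    api_key="sk-no-key-required",         # a dummy key; llama-server does not require one by default
)

# Chat completion through the server
chat = client.chat.completions.create(
    model="local-model",  # placeholder name; the server serves whatever model it loaded
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(chat.choices[0].message.content)

# Embeddings through the server (the server must be started with embeddings enabled)
emb = client.embeddings.create(
    model="local-model",
    input="llama.cpp server embeddings test",
)
print(len(emb.data[0].embedding))
```

Whether this is better than linking llama.cpp directly comes down to the separation described in the reply: the in-process route (C++ or the Python bindings) avoids the HTTP hop and gives more direct control over the embedding call, while the server route keeps the application decoupled from the model runtime.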