I have an LLM provider running in LM Studio, and I use this config:
{
  "ggml-org/llama.vim",
  init = function()
    vim.g.llama_config = {
      endpoint = "http://10.60.20.7:1234/v1/completions",
    }
  end,
}
I realized that there is no way for me to specify a model, so the server responds with:
{
  "error": {
    "message": "Multiple models are loaded. Please specify a model by providing a 'model' field in the request body.\n\nYour models:\n\ntext-embedding-qwen3-embedding-0.6b\ntext-embedding-qwen3-embedding-8b\nqwen3-32b-mlx\nqwen/qwen3-30b-a3b-2507",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
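For reference, the request the server wants can be reproduced by hand: the error above says the body needs a "model" field and lists the loaded model names. A minimal sketch against the endpoint from the config, using one of the listed names (this is not llama.vim's API, it only shows what the LM Studio server expects; the prompt and token count are placeholders):

```shell
# Build a completion request body like the one llama.vim sends, plus the
# "model" field the server's error asks for. The model name is one of
# those listed in the error message above.
cat > request.json <<'EOF'
{
  "model": "qwen/qwen3-30b-a3b-2507",
  "prompt": "int main(",
  "max_tokens": 32
}
EOF
cat request.json

# Send it to the endpoint from the config above (uncomment to run):
# curl http://10.60.20.7:1234/v1/completions \
#   -H "Content-Type: application/json" \
#   -d @request.json
```

This suggests the problem is only that llama.vim's request body lacks a model field; going by the error text, loading just a single model in LM Studio should also avoid the error, but that is not a real fix when multiple models need to stay loaded.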