-
Check the Readme.
-
It's pretty simple. First, clone the repository with git and change into the llama.cpp directory. Second, build llama.cpp with make. Third, download a model (just search Hugging Face for any model you want; this example uses Vicuna 7B). Fourth, chat with the model. The full sequence is sketched below.
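A minimal sketch of those four steps, assuming a Unix shell and a GGUF model file. The Hugging Face URL and model filename are placeholders (substitute whichever model you want), and the chat binary is named `llama-cli` in recent builds (`main` in older ones):

```bash
# 1. Clone the repository and enter it
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# 2. Build (newer releases prefer cmake, but plain make has worked historically)
make

# 3. Download a model -- placeholder URL; pick any GGUF model from Hugging Face
curl -L -o models/vicuna-7b.gguf \
  https://huggingface.co/<user>/<repo>/resolve/main/<model>.gguf

# 4. Chat with the model (./main in older builds, ./llama-cli in newer ones)
./llama-cli -m models/vicuna-7b.gguf -p "Hello, how are you?" -n 128
```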
-
These "Getting started" instructions no longer seem to be accurate, at least, they dont work from a Mac Terminal.
I started another discussion here for beginners, hoping that it will help me (and others down the line): #10631 |
-
I need to run a LLaMA model locally on my computer. How do I get started? I use Linux.
Where are the models, how do I download them, where do I put them, and how do I start using them?
Can anybody help?