A chat interface for OpenAI and Ollama models featuring chat streaming, local caching, and customisable model values.
OpenAI models require an OpenAI developer key, which lets you pay per token.
Check out the demo here
- Code highlighting on input and response
- LLaVA model support (vision models)
- Easy to share a model with just a link
- Completely local. All your conversations are stored in your browser, not on some server
- Custom model settings
- PWA for lightweight installation on mobile and desktop
Ollama requires you to allow outside connections by setting the OLLAMA_ORIGINS environment variable. I've been testing with *, but setting it to ai.chat.mc.hzuccon.com or harvmaster.github.io should work depending on where you're accessing it from (or, if you're self-hosting, your own domain). For more information see here. A rough sketch of what this looks like when starting Ollama from a shell is shown below (the exact origin value depends on where you access the UI from):
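```bash
# Allow cross-origin requests from the hosted UI ("*" also works while testing)
export OLLAMA_ORIGINS="https://harvmaster.github.io"
ollama serve
```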
- Fix multiple root elements in template (src/components/ChatMessage/ChatmessageChunk.vue)
- Explore continue prompt (src/components/ChatMessage/ChatMessage.vue)
```bash
yarn
# or
npm install
```

The service can be launched in dev mode and is accessible at http://localhost:9200/#/

```bash
quasar dev
```

In dev mode, the HMR_PORT environment variable can be set to enable Hot Module Reloading when the service is sitting behind a (sub)domain:
```yaml
environment:
  - HMR_PORT=443
```
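If you run the dev server directly rather than through Docker Compose, the same variable can presumably be set inline when launching dev mode (this is just a shell assumption, not a script from this repo):

```bash
# Assumption: set HMR_PORT inline when running quasar dev outside Docker Compose
HMR_PORT=443 quasar dev
```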
Lint the files:

```bash
yarn lint
# or
npm run lint
```

Format the files:

```bash
yarn format
# or
npm run format
```

Build the app for production:

```bash
quasar build
```

