Insights: SciSharp/LLamaSharp
Overview
7 Pull requests merged by 3 people
- Update to M.E.AI 9.4.3-preview.1.25230.7 (#1182, merged May 6, 2025)
- Bump Whisper.net from 1.7.4 to 1.8.1 (#1143, merged May 3, 2025)
- Bump Spectre.Console from 0.49.1 to 0.50.0 (#1175, merged May 3, 2025)
- Bump Whisper.net.Runtime from 1.7.4 to 1.8.1 (#1145, merged May 3, 2025)
- Bump Microsoft.SemanticKernel.Abstractions from 1.44.0 to 1.48.0 (#1176, merged May 3, 2025)
- Bump Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation from 8.0.12 to 8.0.15 (#1177, merged May 3, 2025)
- Update LLamaEmbedder, Examples packages, and KernelMemory examples (#1170, merged May 3, 2025)
3 Pull requests opened by 3 people
- MTMD - Remove Llava (#1178, opened May 1, 2025)
- May Binary Update (#1179, opened May 1, 2025)
- Feat/tensor override (#1180, opened May 2, 2025)
2 Issues closed by 2 people
- [BUG]: CUDA errors with two GPUs (multiple parallel requests) (#1091, closed May 7, 2025)
- [BUG]: Tokenization in 0.14.0 adds spaces (#856, closed May 2, 2025)
1 Issue opened by 1 person
- [Feature]: Using xcframework instead of using dylib (#1181, opened May 6, 2025)
56 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all the issues and pull requests with unresolved conversations.
- [BUG]: Error loading the LLava model (#1136, commented on May 1, 2025 • 0 new comments)
- [BUG]: When using large models with the GPU the code crashes with cannot allocate kvcache (#759, commented on May 2, 2025 • 0 new comments)
- [Feature]: SemanticKernel FuctionCall (#758, commented on May 2, 2025 • 0 new comments)
- Split the main package (#754, commented on May 2, 2025 • 0 new comments)
- Unable to load SYCL compiled backend (#746, commented on May 2, 2025 • 0 new comments)
- Asking a question at the start of a session can sometimes trigger a reply of endless blank lines (#745, commented on May 2, 2025 • 0 new comments)
- [BUG]: Fail to Load Model with Chinese Model Path (#744, commented on May 2, 2025 • 0 new comments)
- [Feature]: How should code for different LLM models be integrated into the project? (#739, commented on May 2, 2025 • 0 new comments)
- Add debug mode of LLamaSharp (#732, commented on May 3, 2025 • 0 new comments)
- Add unit test about long context (#731, commented on May 3, 2025 • 0 new comments)
- [BUG]: WSL2 has problem running LLamaSharp with cuda11 (#727, commented on May 3, 2025 • 0 new comments)
- [BUG]: Answer stop abruptly after contextsize, even with limiting prompt size (#722, commented on May 3, 2025 • 0 new comments)
- [BUG]: Linux cuda version detection could be incorrect (#724, commented on May 3, 2025 • 0 new comments)
- Take multiple chat templates into account (#705, commented on May 3, 2025 • 0 new comments)
- [CI] Add more unit test to ensure the the outputs are reasonable (#704, commented on May 3, 2025 • 0 new comments)
- Namespace should be consistent (#693, commented on May 3, 2025 • 0 new comments)
- Unknown model architecture: qwen3 (#1173, commented on May 3, 2025 • 0 new comments)
- How do I continously print the answer word for word when using document ingestion with kernel memory? (#687, commented on May 4, 2025 • 0 new comments)
- System.TypeInitializationException: 'The type initializer for 'LLama.Native.NativeApi' threw an exception.' (#686, commented on May 4, 2025 • 0 new comments)
- [Proposal] Backend-free support (#670, commented on May 4, 2025 • 0 new comments)
- Debian 12 x LLamaSharp 0.11.2 Crashed Silently (#668, commented on May 4, 2025 • 0 new comments)
- IndexOutOfRangeException when calling IKernelMemory.AskAsync() (#661, commented on May 4, 2025 • 0 new comments)
- [Proposal] Refactor the mid-level and high-level implementations of LLamaSharp (#684, commented on May 4, 2025 • 0 new comments)
- Add a best practice example for RAG (#648, commented on May 5, 2025 • 0 new comments)
- AccessViolationException (#654, commented on May 5, 2025 • 0 new comments)
- [Native Lib] Support specifying LLaVA native library path (#644, commented on May 5, 2025 • 0 new comments)
- Unable to use lora in llamasharp but can use it in llama.cpp (#618, commented on May 5, 2025 • 0 new comments)
- SemanticKernel ChatCompletion is Stateless (#614, commented on May 5, 2025 • 0 new comments)
- Godot game engine example (#608, commented on May 5, 2025 • 0 new comments)
- [Feature] Support GritLM to get embeddings (#646, commented on May 5, 2025 • 0 new comments)
- Llama.web app published into iis windows 64bit server, after deployment model values not loaded from appsettings (#597, commented on May 6, 2025 • 0 new comments)
- Cannot add a user message after another user message (Parameter message (#585, commented on May 6, 2025 • 0 new comments)
- Separating and Streamlining llama/llava binaries Suggestion (#583, commented on May 6, 2025 • 0 new comments)
- [Kernel Memory] Integrate TextGenerationOptions to LLamaSharp.kernel-memory (#580, commented on May 6, 2025 • 0 new comments)
- Thread Safety in llama.cpp (#596, commented on May 6, 2025 • 0 new comments)
- Examples don't run with CUDA12 (#599, commented on May 6, 2025 • 0 new comments)
- Consider adding Windows on ARM build of llama.dll to LLamaSharp.Backend.Cpu (#600, commented on May 6, 2025 • 0 new comments)
- Stateless executor doesn't work with LlamaSharp 0.10 & NET 8.0 (#578, commented on May 7, 2025 • 0 new comments)
- How to accelerate running speed in CPU environment? (#562, commented on May 7, 2025 • 0 new comments)
- How to use embedding correctly (#547, commented on May 7, 2025 • 0 new comments)
- Possibly useful for documentation: Article by us on Medium about building a Console App with .Net 8.0 (#543, commented on May 7, 2025 • 0 new comments)
- ZLUDA Support (#537, commented on May 7, 2025 • 0 new comments)
- NativeApi: `TryLoadLibrary()` can fail for some systems (#524, commented on May 7, 2025 • 0 new comments)
- RuntimeError: for .NET framework 4.7.2. (#508, commented on May 7, 2025 • 0 new comments)
- [BUG]: Wrong behavior on InferenceParams.AntiPrompts (#1056, commented on May 7, 2025 • 0 new comments)
- llamasharp.backend.cpu is missing NuGet package README file (#506, commented on May 8, 2025 • 0 new comments)
- Avoid declaring constructors with parameters if the properties of the type can be obtained from configuration settings. (#498, commented on May 8, 2025 • 0 new comments)
- Use NerdBank.GitVersioning for versioning (#490, commented on May 8, 2025 • 0 new comments)
- System.Runtime.InteropServices.MarshalDirectiveException: 'Method's type signature is not PInvoke compatible.' (#484, commented on May 8, 2025 • 0 new comments)
- Using this repo in unity 3d. (#482, commented on May 8, 2025 • 0 new comments)
- Wrong result when change to other model. (#481, commented on May 8, 2025 • 0 new comments)
- Use NBGV for versioning (#491, commented on May 8, 2025 • 0 new comments)
- Introduce ChatHistory interface (#669, commented on May 4, 2025 • 0 new comments)
- Automatic Solution Generator - Work in progress (#676, commented on May 4, 2025 • 0 new comments)
- feat: support dynamic native library loading in .NET standard 2.0. (#738, commented on May 2, 2025 • 0 new comments)
- add LLamaReranker and tests (#1150, commented on May 7, 2025 • 0 new comments)