
Comparing changes

base repository: SciSharp/LLamaSharp, base: master
head repository: SciSharp/LLamaSharp, compare: preview
  • 3 commits
  • 1 file changed
  • 2 contributors

Commits on Dec 14, 2023

  1. Using CUDA while decoupling from the CUDA Toolkit as a hard-dependency

    Possible solution for #350
    
    Adds an alternative fallback method of detecting the system-supported CUDA version, making the CUDA Toolkit installation optional. It parses the output of the command-line tool "nvidia-smi" (preinstalled with the NVIDIA drivers), which reports the CUDA version supported by the system.
    
    Confirmed to work on Windows only, but a similar approach could likely be applied to Linux and macOS as well; the code for those two platforms is left untouched.
    
    After that, CUDA can be used simply by copying the NVIDIA libraries from the original llama.cpp repo (the "bin-win-cublas-cu12.2.0-x64.zip" asset) into the root folder of the built program, for example "\LLama.Examples\bin\Debug\net8.0\".
    Onkitova committed Dec 14, 2023 · 4f1bda1
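
The detection approach described in the commit can be sketched as follows. This is a minimal Python illustration of the technique, not LLamaSharp's actual C# implementation; it assumes `nvidia-smi` is on the PATH, and the function names are illustrative.

```python
import re
import subprocess


def parse_cuda_version(smi_output):
    """Extract (major, minor) CUDA version from nvidia-smi's header line.

    The header typically contains a fragment like "CUDA Version: 12.2".
    Returns None if no version string is found.
    """
    m = re.search(r"CUDA Version:\s*(\d+)\.(\d+)", smi_output)
    return (int(m.group(1)), int(m.group(2))) if m else None


def detect_cuda_version():
    """Query nvidia-smi for the driver-supported CUDA version.

    nvidia-smi ships with the NVIDIA driver itself, so this works even
    when the CUDA Toolkit is not installed -- the fallback the commit
    describes.
    """
    try:
        out = subprocess.run(
            ["nvidia-smi"], capture_output=True, text=True, timeout=10
        ).stdout
    except (OSError, subprocess.TimeoutExpired):
        return None  # no NVIDIA driver, or nvidia-smi not on PATH
    return parse_cuda_version(out)


if __name__ == "__main__":
    print(detect_cuda_version())
```

Splitting the parsing from the process invocation keeps the version-matching logic testable on machines without an NVIDIA GPU.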

Commits on Dec 15, 2023

  1. Merge pull request #365 from Onkitova/preview

    feat: using CUDA while decoupling from the CUDA Toolkit as a hard-dependency
    AsakusaRinne authored Dec 15, 2023 · b79387f

Commits on Dec 16, 2023

  1. 609e296