
Conversation

@jhen0409 jhen0409 commented Oct 7, 2023

I'm doing the same thing as in ggml-org/whisper.cpp#1293 (review): this allows it to load a precompiled default.metallib, and to use the Metal source as a fallback.

For easier testing from the command line, we can use xcrun -sdk macosx metal ggml-metal.metal to compile default.metallib, then run ./main.

I use SWIFTPM_MODULE_BUNDLE to get the bundle for the Swift package, so we can share the code on both sides.
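The load order described above can be sketched in plain C. This is only an illustration with hypothetical names (file_exists, load_metal_library) and paths; the actual Metal API calls in ggml-metal.m are reduced to comments here so the sketch stays self-contained:

```c
#include <stdio.h>

// Hypothetical sketch of the fallback order: prefer a precompiled
// default.metallib; otherwise fall back to compiling the
// ggml-metal.metal source at runtime.
typedef enum { LIB_NONE, LIB_PRECOMPILED, LIB_FROM_SOURCE } lib_kind;

// Returns 1 if the file at 'path' can be opened for reading, else 0.
static int file_exists(const char * path) {
    FILE * f = fopen(path, "rb");
    if (f) { fclose(f); return 1; }
    return 0;
}

static lib_kind load_metal_library(const char * metallib_path,
                                   const char * source_path) {
    if (file_exists(metallib_path)) {
        // here the real code would load the compiled library,
        // e.g. via the Metal newLibraryWithURL API
        return LIB_PRECOMPILED;
    }
    if (file_exists(source_path)) {
        // fallback: compile the .metal source at runtime
        return LIB_FROM_SOURCE;
    }
    return LIB_NONE;
}
```

The same ordering applies whether the paths come from a CMake build directory or, on the Swift package side, from the SWIFTPM_MODULE_BUNDLE resource bundle.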

@jhen0409 jhen0409 changed the title metal : improve library load & reuse code for swift package metal : support default.metallib load & reuse code for swift package Oct 7, 2023
@ggerganov ggerganov merged commit c26765a into ggml-org:master Oct 7, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 12, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  py : change version of numpy requirement to 1.24.4 (ggml-org#3515)
  quantize : fail fast on write errors (ggml-org#3521)
  metal : support default.metallib load & reuse code for swift package (ggml-org#3522)
  llm : support Adept Persimmon 8B (ggml-org#3410)
  Fix for ggml-org#3454 (ggml-org#3455)
  readme : update models, cuda + ppl instructions (ggml-org#3510)
  server : docs fix default values and add n_probs (ggml-org#3506)
