
Misc. bug: [SYCL] Unexpected "setvars.sh has already been run" warning #13333


Closed

codayon opened this issue May 6, 2025 · 5 comments

@codayon

codayon commented May 6, 2025

Name and Version

version: 5288 (a7366fa)
built with Intel(R) oneAPI DPC++/C++ Compiler 2025.1.1 (2025.1.1.20250418) for x86_64-unknown-linux-gnu

Linux ubuntu 6.11.0-25-generic #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Documentation/Github

Command line

source /opt/intel/oneapi/setvars.sh
sycl-ls

./examples/sycl/build.sh

Problem description & steps to reproduce

While trying to build llama.cpp, I got an unexpected warning:

:: WARNING: setvars.sh has already been run. Skipping re-execution.
   To force a re-execution of setvars.sh, use the '--force' option.
   Using '--force' can result in excessive use of your environment variables.
  
usage: source setvars.sh [--force] [--config=file] [--help] [...]
  --force        Force setvars.sh to re-run, doing so may overload environment.
  --config=file  Customize env vars using a setvars.sh configuration file.
  --help         Display this help message and exit.
  ...            Additional args are passed to individual env/vars.sh scripts
                 and should follow this script's arguments.
  
  Some POSIX shells do not accept command-line options. In that case, you can pass
  command-line options via the SETVARS_ARGS environment variable. For example:
  
  $ SETVARS_ARGS="--config=config.txt" ; export SETVARS_ARGS
  $ . path/to/setvars.sh
  
  The SETVARS_ARGS environment variable is cleared on exiting setvars.sh.
  
The oneAPI toolkits no longer support 32-bit libraries, starting with the 2025.0 toolkit release. See the oneAPI release notes for more details.
  
./examples/sycl/build.sh: line 14: cmake: command not found
./examples/sycl/build.sh: line 23: cmake: command not found

I was following the SYCL guide for Linux. I get this warning several times when running:

source /opt/intel/oneapi/setvars.sh

./examples/sycl/run-llama2.sh 0

Another thing I noticed: the guide makes no mention of the required build packages, e.g. cmake.
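For anyone else hitting the cmake: command not found errors above, installing the usual build tools first should be enough. A minimal sketch for Ubuntu/Debian (package names differ on other distros):

    # Build prerequisites for llama.cpp (Ubuntu/Debian package names)
    sudo apt update
    sudo apt install -y cmake build-essential git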

@Alcpz
Collaborator

Alcpz commented May 6, 2025

The warning comes from the oneAPI toolkit because you sourced setvars.sh to verify the installation and then ran the example scripts.

If we removed the source from the example script, anyone who has already installed and tested the toolkit would run into problems in a fresh terminal unless they source setvars.sh again. I think the warning is preferable to an error.
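If the repeated warning itself is a problem, the example scripts could in principle guard the source. A minimal sketch, not what the scripts currently do, assuming ONEAPI_ROOT is exported by setvars.sh (recent toolkits do):

    # Only source setvars.sh when the oneAPI environment is not already active.
    # ONEAPI_ROOT is exported by setvars.sh, so its presence is a rough proxy.
    if [ -z "${ONEAPI_ROOT}" ]; then
        source /opt/intel/oneapi/setvars.sh
    fi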

If you think the docs would be improved by having a dependencies section, feel free to open a PR to improve them!

@codayon
Author

codayon commented May 6, 2025

If we removed the source from the example script, anyone who has already installed and tested the toolkit would run into problems in a fresh terminal unless they source setvars.sh again. I think the warning is preferable to an error.

Ahh, maybe this is why I was hitting an error saying,

while loading shared libraries: libsvml.so: cannot open shared object file: No such file or directory

I'm not sure if I missed anything related to this in the documentation. However, I think the documentation is not very newbie-friendly and could be improved.

If you think the docs would be improved by having a dependencies section, feel free to open a PR to improve them!

I would love to contribute! But the thing is, I'm running into an issue with model output: the models keep repeating the same words. I'll search for a way to fix it first; otherwise, I may have to open another issue. So far, the models I have tested are llama-2-7b.Q4_0 and qwen2.5-coder-0.5b-q8_0.

(screenshots of the repeated model output)

@Alcpz
Collaborator

Alcpz commented May 6, 2025

while loading shared libraries: libsvml.so: cannot open shared object file: No such file or directory

Yes, that's the error you encounter when setvars.sh has not been sourced.
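A quick way to confirm from a fresh terminal (the binary path below assumes the default cmake build directory):

    # If this prints "not found" next to libsvml.so, the oneAPI environment is inactive.
    ldd ./build/bin/llama-cli | grep libsvml
    # The fix is simply to source the environment first:
    source /opt/intel/oneapi/setvars.sh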

I would love to contribute!

It's easy for us to miss "obvious" things since we are used to the project.

But the thing is, I'm running into an issue with model output, they are repeating the same words.

I'm not familiar with the frontend you are using. Did you try running llama-cli first to ensure the issue comes from llama.cpp itself? (You may be correct that something is happening in the SYCL backend.) The model with Q4_0 could be failing since we have an experimental optimization that has problems on Windows, but q8_0 should be working fine.
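Something along these lines should be enough to test the backend without any frontend (model path and prompt are placeholders):

    # Minimal llama-cli run; -ngl 99 offloads all layers to the GPU,
    # -n 64 limits generation so repetition is easy to spot.
    ./build/bin/llama-cli -m models/llama-2-7b.Q4_0.gguf -p "Hello, my name is" -n 64 -ngl 99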

Could you try llama3-8b? I recently saw issues with llama2 q4_0 but I wasn't able to reproduce them.

@qnixsynapse
Collaborator

But the thing is, I'm running into an issue with model output: the models keep repeating the same words.

Can you try a modern big enough model like gemma 3 4B to see if this problem persists?

@codayon
Author

codayon commented May 6, 2025

Can you try a modern big enough model like gemma 3 4B to see if this problem persists?

Yup, I think the problem was with the models. Output from gemma-3-4b-it-Q4_0.gguf:

(screenshot of the gemma-3-4b-it output)

@codayon codayon closed this as completed May 10, 2025