
[Question]: When using PPDiffusers and calling FastDeploy, I get the following error. How can I fix it? #6437


Closed
Siri-2001 opened this issue Jul 19, 2023 · 1 comment
Labels
question Further information is requested triage

Comments

@Siri-2001

Please describe your question

The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['']
Traceback (most recent call last):
File "main.py", line 34, in <module>
image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_mega.py", line 91, in text2img
output = temp_pipeline(
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 368, in __call__
text_embeddings = self._encode_prompt(
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 160, in _encode_prompt
text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/fastdeploy_utils.py", line 102, in __call__
return self.model.infer(inputs)
File "/root/miniconda3/lib/python3.8/site-packages/fastdeploy/runtime.py", line 64, in infer
return self._runtime.infer(data)
OSError:


C++ Traceback (most recent call last):

0 paddle::AnalysisPredictor::ZeroCopyRun()
1 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
2 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
3 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
4 paddle::framework::StructKernelImpl<paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
5 paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
6 void phi::funcs::Blas<phi::GPUContext>::MatMul(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const
7 void phi::funcs::Blas<phi::GPUContext>::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
8 phi::GPUContext::CublasCall(std::function<void (cublasContext*)> const&) const
9 phi::GPUContext::Impl::CublasCall(std::function<void (cublasContext*)> const&)::{lambda()#1}::operator()() const
10 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
11 phi::enforce::GetCurrentTraceBackString[abi:cxx11]


Error Message Summary:

ExternalError: CUBLAS error(7).
[Hint: Please search for the error code(7) on website (https://docs.nvidia.com/cuda/cublas/index.html#cublasstatus_t) to get Nvidia's official solution and advice about CUBLAS Error.] (at /home/fastdeploy/develop/paddle_build/v0.0.0/Paddle/paddle/fluid/inference/api/resource_manager.cc:282)
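Following the hint above, the numeric code can be decoded with a small lookup table. The names and values below are a sketch transcribed from the `cublasStatus_t` enum in NVIDIA's cuBLAS documentation (not part of this log); code 7 corresponds to `CUBLAS_STATUS_INVALID_VALUE`, which typically indicates an unsupported or inconsistent parameter passed to a cuBLAS call (often a symptom of a CUDA/driver/library version mismatch in the inference stack).

```python
# Lookup table for cuBLAS status codes, transcribed from the cublasStatus_t
# enum in NVIDIA's cuBLAS documentation (assumed values, not from this log).
CUBLAS_STATUS = {
    0: "CUBLAS_STATUS_SUCCESS",
    1: "CUBLAS_STATUS_NOT_INITIALIZED",
    3: "CUBLAS_STATUS_ALLOC_FAILED",
    7: "CUBLAS_STATUS_INVALID_VALUE",
    8: "CUBLAS_STATUS_ARCH_MISMATCH",
    11: "CUBLAS_STATUS_MAPPING_ERROR",
    13: "CUBLAS_STATUS_EXECUTION_FAILED",
    14: "CUBLAS_STATUS_INTERNAL_ERROR",
    15: "CUBLAS_STATUS_NOT_SUPPORTED",
    16: "CUBLAS_STATUS_LICENSE_ERROR",
}

# Decode the code reported in the traceback above.
print(CUBLAS_STATUS.get(7, "unknown status code"))
```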

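Separately from the crash, the first line of the log is a truncation warning: CLIP's text encoder has a fixed context length of 77 tokens, so any prompt that tokenizes to more than that is silently cut off. A minimal plain-Python sketch of that behavior (no tokenizer dependency; `truncate_ids` is a hypothetical helper, not a ppdiffusers API):

```python
MAX_LENGTH = 77  # CLIP's fixed context length, including special tokens


def truncate_ids(token_ids, max_length=MAX_LENGTH):
    """Keep at most max_length token ids; also return the overflow that
    would trigger a warning like the one at the top of the log."""
    return token_ids[:max_length], token_ids[max_length:]


# Pretend a long prompt tokenized to 100 ids.
ids = list(range(100))
kept, dropped = truncate_ids(ids)
print(len(kept), len(dropped))
```

This warning is harmless on its own (the prompt is just shortened); it is unrelated to the CUBLAS failure below it.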
@Siri-2001 Siri-2001 added the question Further information is requested label Jul 19, 2023
@w5688414
Contributor

w5688414 commented May 7, 2024

For PPDiffusers-related questions, please move to paddlemix.

https://github.com/PaddlePaddle/PaddleMIX

@paddle-bot paddle-bot bot closed this as completed May 13, 2025