Please describe your question
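For context, the failing call looks roughly like the sketch below, reconstructed from the traceback that follows. Only the fd_pipe.text2img(...) line comes from "main.py", line 34 in the trace; the pipeline class, from_pretrained loader, model directory, and prompt are assumptions based on the usual ppdiffusers FastDeploy examples, not the reporter's actual script.

```python
# Hypothetical minimal repro sketched from the traceback below.
# Only the fd_pipe.text2img(...) call is taken from "main.py", line 34;
# the import, model directory, and prompt are placeholders/assumptions.
from ppdiffusers import FastDeployStableDiffusionMegaPipeline

# Load an exported FastDeploy Stable Diffusion model (path is a placeholder).
fd_pipe = FastDeployStableDiffusionMegaPipeline.from_pretrained(
    "./stable-diffusion-v1-5-fastdeploy")

prompt = "a photo of an astronaut riding a horse"  # placeholder prompt
image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
image_text2img.save("text2img.png")
```

Running text-to-image inference this way produces the following output: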
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['']
Traceback (most recent call last):
File "main.py", line 34, in
image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_mega.py", line 91, in text2img
output = temp_pipeline(
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 368, in call
text_embeddings = self._encode_prompt(
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 160, in _encode_prompt
text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/fastdeploy_utils.py", line 102, in call
return self.model.infer(inputs)
File "/root/miniconda3/lib/python3.8/site-packages/fastdeploy/runtime.py", line 64, in infer
return self._runtime.infer(data)
OSError:
C++ Traceback (most recent call last):
0 paddle::AnalysisPredictor::ZeroCopyRun()
1 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
2 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
3 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
4 paddle::framework::StructKernelImpl<paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
5 paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
6 void phi::funcs::Blas<phi::GPUContext>::MatMul(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const
7 void phi::funcs::Blas<phi::GPUContext>::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
8 phi::GPUContext::CublasCall(std::function<void (cublasContext*)> const&) const
9 phi::GPUContext::Impl::CublasCall(std::function<void (cublasContext*)> const&)::{lambda()#1}::operator()() const
10 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
11 phi::enforce::GetCurrentTraceBackString[abi:cxx11]
Error Message Summary:
ExternalError: CUBLAS error(7).
[Hint: Please search for the error code(7) on website (https://docs.nvidia.com/cuda/cublas/index.html#cublasstatus_t) to get Nvidia's official solution and advice about CUBLAS Error.] (at /home/fastdeploy/develop/paddle_build/v0.0.0/Paddle/paddle/fluid/inference/api/resource_manager.cc:282)