[Bug]: Multi-threaded inference with Paddle's UIE model #5410

Closed
1 task done
chinesejunzai12 opened this issue Mar 24, 2023 · 1 comment
Assignees
Labels
bug (Something isn't working) · triage

Comments

@chinesejunzai12

Software Environment

- paddlepaddle:2.4.1
- paddlepaddle-gpu: 2.4.1.post112
- paddlenlp: 2.5.0

Duplicate Check

  • I have searched the existing issues

Error Description

When running inference on the GPU with multiple threads, it only succeeded once in my experiments; most runs fail. For example, when I start 3 threads, only one of them ends up working, and the other two report the same error, shown below:
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/short_name_uie_infer_gpu.py", line 115, in main
    predictor = UIEPredictor(args)
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 100, in __init__
    device_id=args.device_id if args.device == "gpu" else 0,
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 61, in __init__
    self.predictor = ort.InferenceSession(onnx_model, sess_options=sess_options, providers=providers)
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ../checkpoint_厂商简称_96/model_best/fp16_model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:130 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.
Could you help clarify this? Thanks.
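For context, the traceback shows each spawned process constructing its own ort.InferenceSession from the same fp16_model.onnx path. If that file is being (re)exported by one process while another is reading it, ONNX Runtime can see an incomplete file and fail with "ModelProto does not have a graph". Below is a minimal sketch (not the original short_name_uie_infer_gpu.py; the path and the convert-once guard are assumptions) of performing the fp16 export once in the parent and having the workers only read the finished file:

```python
import os
import multiprocessing as mp

import onnxruntime as ort

# Path assumed from the traceback; adjust to your checkpoint directory.
MODEL_PATH = "../checkpoint/model_best/fp16_model.onnx"


def worker(device_id: int, model_path: str) -> None:
    # Each process builds its own InferenceSession; sessions cannot be shared
    # across processes, so every worker must see a fully written ONNX file.
    providers = [("CUDAExecutionProvider", {"device_id": device_id})]
    sess = ort.InferenceSession(model_path, providers=providers)
    # ... run sess.run(...) on this worker's share of the input here ...


if __name__ == "__main__":
    # Guard (assumption): export/convert the fp16 model exactly once, before any
    # child process is spawned, e.g. by running the predictor once in the parent
    # or by a separate offline export step.
    assert os.path.exists(MODEL_PATH), "convert the model once before spawning workers"

    ctx = mp.get_context("spawn")
    procs = [ctx.Process(target=worker, args=(0, MODEL_PATH)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```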

Steps to Reproduce & Code

This is multi-threaded code I wrote myself; it used to work fine, but now it no longer does. I run it directly with:
python ../deploy/python/short_name_uie_infer_gpu.py \
    --model_path_prefix ../checkpoint/model_best/model \
    --use_fp16 \
    --device_id 0 \
    --multilingual \
    --source_table table1 \
    --sink_table table2 \
    --workers 2
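An alternative mitigation sketch, assuming --workers maps to one process per worker and that the crash happens while each worker builds its predictor: serialize only the construction step with a multiprocessing lock, so a single process at a time touches the exported ONNX files, while inference itself still overlaps. build_predictor() below is a stand-in for the UIEPredictor(args) call in the real script:

```python
from multiprocessing import get_context

import onnxruntime as ort

MODEL_PATH = "../checkpoint/model_best/fp16_model.onnx"  # assumed path


def build_predictor(device_id: int) -> ort.InferenceSession:
    # Stand-in for UIEPredictor(args); only this step is serialized below.
    providers = [("CUDAExecutionProvider", {"device_id": device_id})]
    return ort.InferenceSession(MODEL_PATH, providers=providers)


def worker(lock, device_id: int) -> None:
    with lock:
        sess = build_predictor(device_id)
    # ... run sess.run(...) outside the lock so workers still run in parallel ...


if __name__ == "__main__":
    ctx = get_context("spawn")
    lock = ctx.Lock()
    procs = [ctx.Process(target=worker, args=(lock, 0)) for _ in range(2)]  # --workers 2
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```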

chinesejunzai12 added the bug (Something isn't working) label on Mar 24, 2023
@w5688414
Contributor

w5688414 commented May 8, 2024

Concurrency is not supported natively. We recommend deploying UIE as a service with FastDeploy; see: https://github.com/PaddlePaddle/FastDeploy/blob/develop/examples/text/uie/README_CN.md?plain=1
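For reference, the pattern the serving deployment gives you is a single long-lived predictor behind a service, with many client threads sending requests to it. The sketch below only illustrates that client-side pattern; the URL and JSON payload are placeholders, not FastDeploy's actual request schema (see the linked README for the real client example):

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical endpoint exposed by the serving deployment.
SERVER_URL = "http://127.0.0.1:8000/uie"


def extract(text: str) -> dict:
    # Placeholder request format; adapt it to the deployed service's schema.
    resp = requests.post(SERVER_URL, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    texts = ["文本一", "文本二", "文本三"]
    with ThreadPoolExecutor(max_workers=3) as pool:
        for result in pool.map(extract, texts):
            print(result)
```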

paddle-bot closed this as completed on May 13, 2025