
[Bug]: Multi-threaded inference with the Paddle UIE model #5410

Closed
@chinesejunzai12

Description

Software Environment

- paddlepaddle:2.4.1
- paddlepaddle-gpu: 2.4.1.post112
- paddlenlp: 2.5.0

Duplicate check

  • I have searched the existing issues

Bug Description

Running inference on the GPU with multiple threads only succeeded once in my experiments; most runs fail. For example, when starting 3 threads, only one of them ends up working, and the other two report the identical error below:
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/short_name_uie_infer_gpu.py", line 115, in main
    predictor = UIEPredictor(args)
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 100, in __init__
    device_id=args.device_id if args.device == "gpu" else 0,
  File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 61, in __init__
    self.predictor = ort.InferenceSession(onnx_model, sess_options=sess_options, providers=providers)
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ../checkpoint_厂商简称_96/model_best/fp16_model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:130 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.
Could you please help explain this? Thanks.

Steps to Reproduce & Code

This is multi-threaded code I wrote myself; it used to work fine but now fails. Run it directly with:
python ../deploy/python/short_name_uie_infer_gpu.py \
    --model_path_prefix ../checkpoint/model_best/model \
    --use_fp16 \
    --device_id 0 \
    --multilingual \
    --source_table table1 \
    --sink_table table2 \
    --workers 2
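Before launching workers it may be worth verifying that the exported FP16 model file exists and is non-empty, since an empty or truncated file is exactly what produces "ModelProto does not have a graph" when `ort.InferenceSession` loads it. A small pre-flight sketch; `check_onnx_file` is a hypothetical helper, and the path below mirrors the repro command rather than anything confirmed in the repo:

```python
import os

def check_onnx_file(path):
    # Cheap sanity check on the exported ONNX file before any worker
    # tries to build an inference session from it.
    if not os.path.isfile(path):
        return "missing"
    if os.path.getsize(path) == 0:
        return "empty"
    return "ok"

if __name__ == "__main__":
    print(check_onnx_file("../checkpoint/model_best/fp16_model.onnx"))
```

If this prints "empty" after a failed run, the export step was interrupted or raced, which would support the concurrent-write explanation above.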

Labels

bug (Something isn't working), triage
