I am running inference on the GPU with multiple workers (spawned processes). In my experiments it only succeeded once; most runs fail. For example, when 3 workers are started, only one of them ends up working, and the other two fail with the same error, shown below:
Process SpawnProcess-1:
Traceback (most recent call last):
File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/short_name_uie_infer_gpu.py", line 115, in main
predictor = UIEPredictor(args)
File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 100, in __init__
device_id=args.device_id if args.device == "gpu" else 0,
File "/data/disk-2T/houxiaojun/PaddleNLP-develop/model_zoo/uie/deploy/python/uie_predictor.py", line 61, in __init__
self.predictor = ort.InferenceSession(onnx_model, sess_options=sess_options, providers=providers)
File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/data/disk-2T/houxiaojun/anaconda3/envs/uie_env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ../checkpoint_厂商简称_96/model_best/fp16_model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:130 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.
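For reference, the setup is essentially the following pattern (a simplified sketch with placeholder names and paths, not the actual short_name_uie_infer_gpu.py): each spawned process builds its own predictor, and therefore its own onnxruntime.InferenceSession, on the GPU.

# Simplified sketch of the failing pattern (placeholder names/paths, not the real script).
# Every spawned process creates its own onnxruntime.InferenceSession on the GPU,
# which is the call that raises the error above.
import multiprocessing as mp
import onnxruntime as ort

ONNX_PATH = "fp16_model.onnx"  # placeholder

def run_worker(device_id: int) -> None:
    providers = [("CUDAExecutionProvider", {"device_id": device_id})]
    sess = ort.InferenceSession(ONNX_PATH, providers=providers)
    # ... sess.run(...) over this worker's share of the data ...

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # matches "SpawnProcess-1" in the traceback
    procs = [ctx.Process(target=run_worker, args=(0,)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()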
Could you please help clarify this? Thanks.
Software environment
Duplicate check
Error description
Steps to reproduce & code
The multi-worker code is my own; it used to work fine, but now it no longer does. It is launched directly with:
python ../deploy/python/short_name_uie_infer_gpu.py \
    --model_path_prefix ../checkpoint/model_best/model \
    --use_fp16 \
    --device_id 0 \
    --multilingual \
    --source_table table1 \
    --sink_table table2 \
    --workers 2
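Running a single-process check on the exported file before any worker starts would show whether the file itself is intact (the path below is a placeholder for the actual checkpoint directory):

# Sanity check: load the exported ONNX file once, in a single process, before any
# workers start. If this already fails, the fp16 export is incomplete; if it passes
# but the workers still fail, the problem lies in the concurrent startup.
import onnx

model = onnx.load("../checkpoint/model_best/fp16_model.onnx")  # placeholder path
onnx.checker.check_model(model)
print("graph nodes:", len(model.graph.node))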