We are currently trying to enable the Neuropod python backend through the Neuropod JNI bindings. After enabling python isolation, since the packaged python environment doesn't have torch pre-installed, we hit `No module named 'torch'` when loading a torch model.
Including a requirements.lock file could resolve the issue, but that triggers an installation at model-load time. This could be a problem when loading models on a large number of machines simultaneously. You also mention in the code that this approach is problematic when running multiple python models in a single process and is only intended to work when using OPE. So I am wondering: is it possible to pre-install the necessary packages (such as torch) into the isolated python environment before loading the model, while keeping the python backend small at the same time?
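For context, the kind of lockfile we would package with the model is just a pinned pip requirements list, roughly like the following (the package versions here are illustrative, not the ones we actually ship):

```
# requirements.lock (illustrative pins)
torch==1.7.1
numpy==1.19.5
```

The concern is that every machine resolving and installing these pins at load time turns model loading into a pip install fan-out across the fleet.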