Description
Prerequisite
- I have searched Issues and Discussions but cannot get the expected help.
- The bug has not been fixed in the latest version (https://github.com/open-mmlab/mmengine).
Environment
OrderedDict([('sys.platform', 'linux'), ('Python', '3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]'), ('CUDA available', True), ('numpy_random_seed', 2147483648), ('GPU 0', 'Quadro P620'), ('CUDA_HOME', None), ('GCC', 'x86_64-conda_cos7-linux-gnu-gcc (Anaconda gcc) 11.2.0'), ('PyTorch', '1.12.1'), ('PyTorch compiling details', 'PyTorch built with:\n - GCC 9.3\n - C++ Version: 201402\n - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\n - LAPACK is enabled (usually provided by MKL)\n - NNPACK is enabled\n - CPU capability usage: AVX2\n - CUDA Runtime 11.6\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37\n - CuDNN 8.3.2 (built against CUDA 11.5)\n - Magma 2.6.1\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.6, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast 
-fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \n'), ('TorchVision', '0.13.1'), ('OpenCV', '4.7.0'), ('MMEngine', '0.7.2')])
Reproduces the problem - code sample
Technically, this is NOT a bug.
A useful tool, get_model_complexity_info(), is provided to measure FLOPs, #params, etc.
However, this function currently treats input_shape as a required positional argument.
import torch
import torch.nn as nn
from mmengine.analysis import get_model_complexity_info

class mymodel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.l = nn.Linear(in_features=5, out_features=6)

    def forward(self, x):
        out = self.l(x)
        return out

def main():
    input = torch.randn(5)
    model = mymodel()
    complexity = get_model_complexity_info(model=model, inputs=input)

if __name__ == "__main__":
    main()
Reproduces the problem - command or script
Save the above code to main.py and run
python main.py
Reproduces the problem - error message
TypeError: get_model_complexity_info() missing 1 required positional argument: 'input_shape'
Additional information
To work around this, one must explicitly pass input_shape=None, as in this line. This is neither elegant nor necessary.
This behavior is not reasonable: the argument input_shape is only used to construct an input tensor when inputs is None. See this line.
Instead, the function should expect users to provide exactly one of input_shape and inputs.
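A minimal sketch of the suggested signature (hypothetical, not mmengine's actual implementation): both arguments default to None, and the function validates that exactly one of them is set, so callers never need to write input_shape=None by hand.

```python
# Hypothetical sketch of the suggested signature; the name mirrors
# mmengine's get_model_complexity_info, but this is NOT the library's code.
def get_model_complexity_info(model, input_shape=None, inputs=None):
    # Require exactly one of input_shape / inputs to be provided.
    if (input_shape is None) == (inputs is None):
        raise ValueError(
            'One and only one of "input_shape" and "inputs" should be set.')
    if inputs is None:
        # Placeholder for the tensor construction the real function performs
        # from input_shape (e.g. torch.randn(*input_shape)).
        inputs = ('tensor-of-shape', tuple(input_shape))
    # ... downstream FLOPs/#params analysis would consume `inputs` here ...
    return {'inputs': inputs}
```

With this signature, both get_model_complexity_info(model=m, inputs=x) and get_model_complexity_info(model=m, input_shape=(5,)) work, while passing neither or both raises a clear error.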