ML.Net - The first dimension of paddings must be the rank of inputs[4,2] [1,1,320,320,3] #5364

Closed
crazyoutlook opened this issue Aug 25, 2020 · 6 comments
Labels: bug, image, need info, P2

Comments

@crazyoutlook

System information
OS version/distro: Windows 10 Pro
.NET Version (eg., dotnet --info): dotnet framework 4.7
Issue :
We are working on consuming a TensorFlow model in .NET using ML.NET, following the tutorial below as a reference:

Tutorial Link : https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/image-classification

We tested with the model used in the tutorial and it worked fine. But when we replace the tutorial model with our own TensorFlow model (an object detection model exported from Azure Custom Vision), it throws an exception: TensorflowException: The first dimension of paddings must be the rank of inputs[4,2] [1,1,320,320,3] [[{{node conv1/pad_size}}]]
The same Custom Vision model works fine when consumed from Python code.

Source code / logs
Details:

Project Name : TransferLearningTF
Class name : program.cs
Method Name : GenerateModel

Code :

IEstimator<ITransformer> pipeline = mlContext.Transforms.LoadImages(outputColumnName: "image_tensor", imageFolder: _imagesFolder, inputColumnName: nameof(ImageData.ImagePath))
    .Append(mlContext.Transforms.ResizeImages(outputColumnName: "image_tensor", imageWidth: InceptionSettings.ImageWidth, imageHeight: InceptionSettings.ImageHeight, inputColumnName: "image_tensor"))
    .Append(mlContext.Transforms.ExtractPixels(outputColumnName: "image_tensor"))
    .Append(mlContext.Model.LoadTensorFlowModel(_inceptionTensorFlowModel)
        .ScoreTensorFlowModel(outputColumnNames: new[] { "detected_boxes", "detected_scores", "detected_classes" }, inputColumnNames: new[] { "image_tensor" }, addBatchDimensionInput: true))
    .AppendCacheCheckpoint(mlContext);

IDataView trainingData = mlContext.Data.LoadFromTextFile<ImageData>(path: _trainTagsTsv, hasHeader: false);
ITransformer model = pipeline.Fit(trainingData);
IDataView testData = mlContext.Data.LoadFromTextFile<ImageData>(path: _testTagsTsv, hasHeader: false);
IDataView predictions = model.Transform(testData);
IEnumerable<ImagePrediction> imagePredictionData = mlContext.Data.CreateEnumerable<ImagePrediction>(predictions, reuseRowObject: true);

Exception Details :

TensorflowException: The first dimension of paddings must be the rank of inputs[4,2] [1,1,320,320,3]
[[{{node conv1/pad_size}}]]
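
The message is a rank mismatch: a paddings tensor of shape [4,2] carries one (before, after) pair per dimension and can therefore only pad a rank-4 input, but the graph is receiving a rank-5 tensor of shape [1,1,320,320,3]. A minimal numpy sketch (illustrative only, not ML.NET internals) reproduces the same constraint:

```python
import numpy as np

# One (before, after) pad pair per input dimension, i.e. shape [4, 2]:
# the pad op can only be applied to a rank-4 tensor.
paddings = [(0, 0), (1, 1), (1, 1), (0, 0)]

ok_input = np.zeros((1, 320, 320, 3))      # rank 4: matches the 4 pairs
bad_input = np.zeros((1, 1, 320, 320, 3))  # rank 5: one dimension too many

padded = np.pad(ok_input, paddings)        # works; shape becomes (1, 322, 322, 3)

try:
    np.pad(bad_input, paddings)            # 4 pad pairs for a rank-5 array
except ValueError as e:
    print("pad failed:", e)
```

This suggests the input tensor reaching the graph has one more dimension than the model's pad node expects.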

@michaelgsharp added the bug, image, and P1 labels Aug 25, 2020
@michaelgsharp
Member

Can you share any of the training data or the model? If not, what are the dimensions of the images you are using, and what input shape does the TensorFlow model expect?

Is the Python code using the TensorFlow Python API, or another package? In the Python code that works, do you have to do any reshaping, or does it work by calling the TensorFlow model directly?

@michaelgsharp added the need info label Aug 25, 2020
@crazyoutlook
Author

Hi,

The package is the TensorFlow model exported from Azure Custom Vision.

I checked the model using the Netron app, and it shows the input image size is 320 x 320. I am resizing to that dimension and then building the pipeline. I have attached screenshots of the model as seen in Netron.

The error occurs when using ML.NET: I am consuming the model from ML.NET C# code, after exporting it from Azure Custom Vision for local use.

The same model works fine with Python TensorFlow code, with the same resizing to 320 x 320. That is also the same model exported from Azure Custom Vision and consumed locally.

Basically, the same model works fine in my Python application but gives an error when used with C# ML.NET.

Regards,
Amit

(Screenshots attached: img1, img2 — the model as shown in Netron.)

@michaelgsharp
Member

Without the model or code it is harder to look into, but here is my guess. ML.Net always adds an extra 1 to the input dimensions, meaning a batch/row size of 1.

So when you are doing the resize for ML.Net, try resizing to 1 x 320 x 320 and see what happens. Let me know after you are able to try that.
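
If that guess is right, the rank-5 shape in the error comes from the batch dimension being added twice: once because the exported graph's image_tensor input is already batched as [1,320,320,3], and once more by ML.NET (the addBatchDimensionInput: true flag in the pipeline above). A numpy sketch of the double expansion (illustrative shape arithmetic, not ML.NET code):

```python
import numpy as np

image = np.zeros((320, 320, 3))                   # one resized image, rank 3

batched = np.expand_dims(image, axis=0)           # (1, 320, 320, 3): what a batched graph input expects
double_batched = np.expand_dims(batched, axis=0)  # (1, 1, 320, 320, 3): the shape in the error message

print(batched.shape)         # (1, 320, 320, 3)
print(double_batched.shape)  # (1, 1, 320, 320, 3)
```

In ML.NET terms, the analogous thing to try would be addBatchDimensionInput: false, so the pipeline does not prepend a second batch dimension. This is a guess based on the shape in the error, not a confirmed resolution.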

@michaelgsharp added the P2 label and removed the P1 label Aug 31, 2020
@crazyoutlook
Author

Hi Michael,
I will upload the working code along with the ML model for your reference. I am resizing the input to 320 x 320, but I still get the same error.

@michaelgsharp
Member

Were you able to upload the code/model?

@frank-dong-ms-zz
Contributor

@crazyoutlook we didn't hear back with a repro code/model for a long time, so we're closing this issue. Feel free to reopen if you're able to provide more details. Thanks.

@ghost ghost locked as resolved and limited conversation to collaborators Mar 18, 2022