-
Hello, I was wondering whether the models require specific CRS settings in order to work correctly? I have not found any information about this. The demos on Hugging Face seem to work with different CRSs, and the example files also seem to have different CRSs, but I just want to make sure this is correct, or whether the models work better when a particular CRS is used. Best regards,
-
Hi @Atmoboran, could you provide more information? Which models are you specifically interested in?
-
Hi @Atmoboran!
-
Just to add to this conversation: here is my investigation of the example files of the various Prithvi EO-2.0 demos (https://huggingface.co/ibm-nasa-geospatial), so we can see the differences in the CRSs used. The question is: if model_X is pre-trained with CRS_1, will performance be worse if it is fine-tuned with CRS_2, or is the model able to understand geolocation independently of the CRS choice / able to do the geo-transformation itself? This is what the technical report says about geolocation: "For the 3D positional encodings, we first generate 1D sin/cos encodings individually for time, height, and width dimensions
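In case anyone wants to reproduce this check, here is a minimal sketch of how the CRS, image size, and pixel resolution can be read from the example GeoTIFFs with rasterio (the `example_files/*.tif` pattern is just a placeholder for wherever the demo files were downloaded):

```python
import glob

import rasterio

# Placeholder pattern; point it at the downloaded demo example files.
for path in sorted(glob.glob("example_files/*.tif")):
    with rasterio.open(path) as src:
        # src.crs is the CRS stored in the GeoTIFF (e.g. a UTM EPSG code),
        # src.res is the pixel size in the units of that CRS.
        print(path, src.crs, src.width, src.height, src.res)
```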
Thanks for the clarification and suggestions :) These are good points.
For inference with the pre-trained model, without any fine-tuning, I would recommend using HLS data similar to the examples in the demo (similar CRS and image sizes, though the model should be able to handle some differences there) and keeping the 6 bands we used in pre-training (RGB, NIR, SWIR1, SWIR2).
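In case it helps, here is a minimal sketch (using rasterio, not part of the demo code) of reprojecting an HLS scene to the CRS of one of the demo examples before running inference; the file names and the EPSG code below are placeholders, not values taken from the demos:

```python
import rasterio
from rasterio.warp import Resampling, calculate_default_transform, reproject

# Placeholders: a 6-band HLS GeoTIFF (bands ordered as in pre-training) and
# a target CRS copied from one of the demo example files.
src_path = "hls_scene_6band.tif"
dst_crs = "EPSG:32633"

with rasterio.open(src_path) as src:
    # Compute the affine transform and raster size in the target CRS.
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds
    )
    profile = src.profile.copy()
    profile.update(crs=dst_crs, transform=transform, width=width, height=height)

    # Reproject each band into a new GeoTIFF with the target CRS.
    with rasterio.open("hls_scene_6band_reprojected.tif", "w", **profile) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=dst_crs,
                resampling=Resampling.bilinear,
            )
```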
For fine-tuning, if you are not freezing the encoder, you have more freedom to change the data characteristics significantly, as you are adapting the model to the new data. However, as I mentioned, if the data is indeed very different, it might take several epochs for the model to learn this adaptation. We fine-tuned Prit…