
Training data used with saved model #27

Open
roya0045 opened this issue May 15, 2021 · 3 comments

@roya0045 commented May 15, 2021

Greetings. I'm curious to know if you could share more insight into the training dataset and methodology.

I would guess that the high-resolution dataset and the lower-resolution one were acquired on the same day?

Were spatial and temporal criteria (such as those mentioned previously) taken into account when building the training dataset, so that the model generalizes well across the globe?

@remicres (Owner) commented May 18, 2021

Hi @roya0045

> I would guess that the high-resolution dataset and the lower-resolution one were acquired on the same day?

Yes. The provided pre-trained model was trained on a single (Spot-7, Sentinel-2) image pair.

@remicres added the `question` (Further information is requested) label May 18, 2021
@quizz0n commented May 18, 2021

Hi,
I'm also interested in this topic. Are there any limitations regarding the source of the HR image? In your example you mention the NIR band; can orthoimagery also be used?

@remicres (Owner) commented May 18, 2021

Hi,

Yes, you can use any source and/or target, with any number of spectral bands. In theory you can even train a deep net whose source and target are of different modalities (e.g. SAR as input, optical as output)! The network will try to learn the mapping from one domain to the other.
So yes, an ortho image would be a perfectly "good" target.

Just keep in mind that the perceptual loss (the VGG loss) applies only to the first 3 channels, and it is intended to be used with RGB target images.
If your target image does not have RGB in its first channels, it might be better not to use the perceptual loss.
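The channel restriction described above can be sketched as follows. This is a minimal, hypothetical illustration: a fixed random convolution stands in for the pretrained VGG feature extractor that the real perceptual loss uses, and all names (`features`, `perceptual_loss`, `KERNEL`) are assumptions, not identifiers from this repository. The point it demonstrates is only that bands beyond the first 3 never influence the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed 3x3 kernel acting as a frozen "feature extractor" stand-in
# (the real perceptual loss uses pretrained VGG features instead).
KERNEL = rng.standard_normal((3, 3, 3, 8))  # (kh, kw, in_ch=3, out_ch)

def features(img):
    """Valid 3x3 convolution over an (H, W, 3) image -> (H-2, W-2, 8)."""
    h, w, _ = img.shape
    out = np.zeros((h - 2, w - 2, KERNEL.shape[-1]))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3, :]           # (3, 3, 3)
            out[i, j] = np.tensordot(patch, KERNEL, axes=3)
    return out

def perceptual_loss(pred, target):
    """MSE between features computed on the first 3 channels only."""
    return float(np.mean((features(pred[..., :3])
                          - features(target[..., :3])) ** 2))

# 4-band images (e.g. RGB + NIR): only the first 3 bands enter the loss.
pred = rng.standard_normal((16, 16, 4))
target = pred.copy()
target[..., 3] += 10.0  # perturb the 4th band only
print(perceptual_loss(pred, target))  # 0.0: the extra band is ignored
```

If the first 3 channels of the target are not RGB (say, NIR first), the loss is still computed, but the VGG features were learned on RGB statistics, which is why skipping the perceptual loss can be preferable in that case.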

The case of (source, target) = (Sentinel-2, Spot-6/7) is an easy one, because the spectral bands are close.
