I wanted to train a change detection model and run inference with it using Raster Vision 0.30.1 (pytorch-latest), following these two discussions and the @AdeelH post: https://colab.research.google.com/drive/1lZiyZhjgiLMK29Hk5rIBQv1TEL6_WNjV, with github_repo='AdeelH/pytorch-fpn:0.3' and batch size 4 due to computing limitations. The problem comes when I try to make predictions on a new pair of images.
I have the following ScenePredictor script:

```python
from pathlib import Path

from rastervision.core.data import (
    ClassConfig, Scene, RasterioSource, MultiRasterSource, StatsTransformer,
    ReclassTransformer, ReclassTransformerConfig,
    SemanticSegmentationLabelStore)
from rastervision.core.predictor import ScenePredictor
from rastervision.core.rv_pipeline import SemanticSegmentationPredictOptions
from rastervision.core.data.crs_transformer import IdentityCRSTransformer
from rastervision.core.raster_stats import RasterStats


def get_prediction_config(out_dir: str, bundle_path: str, uri_1: str,
                          uri_2: str, scene_id: str, stats_uri: str):
    ######################################
    # Configure the Scene for New Data
    ######################################
    class_cfg = ClassConfig(
        names=['no change', 'change'],
        colors=['lightgray', 'red'],
        null_class='no change')
    new_scene = make_scene(scene_id, uri_1, uri_2, class_cfg, out_dir,
                           stats_uri)

    ########################
    # Configure the Predictor
    ########################
    predict_options = SemanticSegmentationPredictOptions(chip_sz=256)
    predictor = ScenePredictor(
        model_bundle_uri=bundle_path, predict_options=predict_options)
    return predictor, new_scene


def make_scene(scene_id: str, uri_1: str, uri_2: str, class_cfg: ClassConfig,
               out_dir: str, stats_uri: str) -> Scene:
    raster_source_1 = RasterioSource(uris=[uri_1])
    raster_source_2 = RasterioSource(uris=[uri_2])

    # Define transformer for the combined raster source
    stats = RasterStats.load(stats_uri)
    combined_stats_transformer = StatsTransformer(
        means=stats.means, stds=stats.stds)

    raster_source = MultiRasterSource(
        raster_sources=[raster_source_1, raster_source_2],
        primary_source_idx=1,
        raster_transformers=[combined_stats_transformer])

    crs_transformer = IdentityCRSTransformer()
    label_store = SemanticSegmentationLabelStore(
        uri=out_dir, crs_transformer=crs_transformer, class_config=class_cfg)

    scene = Scene(
        id=scene_id, raster_source=raster_source, label_store=label_store)
    return scene


if __name__ == '__main__':
    out_dir = './predictions_cd/out.json'
    bundle_path = './blog_0/bundle/model-bundle.zip'
    uri_1 = './change_imgs/pre.tif'
    uri_2 = './change_imgs/post.tif'
    scene_id = 'new_scene_id'
    stats_uri = './blog_0/analyze/stats/train_scenes/stats.json'

    # Set up the prediction configuration
    predictor, scene = get_prediction_config(
        out_dir, bundle_path, uri_1, uri_2, scene_id, stats_uri)

    # Run the prediction
    predictor.predict_scene(scene)
```

I am using the model bundle as described in the previous discussions. I am a little bit stuck, thanks for the help!
The error seems to suggest that your input to the model has 6 bands instead of the 26 that the model expects. Are you perhaps feeding in RGB images? Also, unrelated to the error, but you should probably be passing
The stats transformer does not change the number of channels; it just normalizes the pixel values.
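To see where the 6 is coming from, here is a minimal sketch (reusing the paths from your script, and assuming the `num_channels` property on raster sources as in 0.30) that prints the channel count the combined source will feed to the model. Two 13-band Sentinel-2 images give 13 + 13 = 26 channels, while an RGB pair only gives 3 + 3 = 6:

```python
# Minimal sketch with the paths from the question: check the channel count
# that the combined raster source will produce at prediction time.
from rastervision.core.data import RasterioSource, MultiRasterSource

raster_source_1 = RasterioSource(uris=['./change_imgs/pre.tif'])
raster_source_2 = RasterioSource(uris=['./change_imgs/post.tif'])
combined = MultiRasterSource(
    raster_sources=[raster_source_1, raster_source_2], primary_source_idx=1)

# Two 13-band Sentinel-2 images -> 26 channels; two RGB images -> 6 channels.
print(raster_source_1.num_channels, raster_source_2.num_channels,
      combined.num_channels)
```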
What you can do instead is train your model on just the RGB channels. When defining the raster sources in your training code, pass `channel_order=[3, 2, 1]`. This corresponds to the red, green, and blue channels in the 13-band images. Also keep in mind that the images in the OSCD dataset are Sentinel-2 images. A model trained on those images will work best on other Sentinel-2 images. If you feed in images from a different satellite with a different resolution, you will likely not get very good results.
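As a rough sketch of what that looks like (adapted from your make_scene, with placeholder training-image paths), the `channel_order` argument goes on each RasterioSource so that the combined source yields 3 + 3 = 6 channels:

```python
# Sketch only: restrict each raster source to the RGB bands of the 13-band
# Sentinel-2 images before combining them for change detection.
from rastervision.core.data import RasterioSource, MultiRasterSource

# 0-based band indices into the 13-band images: 3 = red, 2 = green, 1 = blue.
rgb_channel_order = [3, 2, 1]

raster_source_1 = RasterioSource(
    uris=['./train_imgs/pre.tif'],  # placeholder path
    channel_order=rgb_channel_order)
raster_source_2 = RasterioSource(
    uris=['./train_imgs/post.tif'],  # placeholder path
    channel_order=rgb_channel_order)

# The two 3-band sources are stacked into a single 6-channel input.
raster_source = MultiRasterSource(
    raster_sources=[raster_source_1, raster_source_2], primary_source_idx=1)
```

If you retrain this way, remember to apply the same `channel_order` at prediction time so the band count of the new inputs still matches what the model expects.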