I am interested in investigating results with and without the image branch of the network. As far as I can tell, you do not provide a checkpoint that uses this branch, so I trained the network with the branch enabled myself.
First of all, I want to confirm that corner_acc.npy and topdown_acc.npy, which you provide with your example, are the weights corresponding to the pretrained DRN and HG networks, and that they are used as-is for both training and validation/testing.
Next, I want to point out a possible bug related to the image features: within RecordWriterTango.py, the RGB values are scaled twice.
You are right on both points. The scaling indeed happens twice, which is a bug. I am not sure how much fixing the scaling issue will benefit the network; I need to test that later. The pretrained weights correspond to the data provided in the .tfrecords.
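For reference, a one-shot normalization would avoid the double scaling. This is only a sketch, assuming the intended input range is [-0.5, 0.5] (the function name `normalize_rgb` is hypothetical, not from the repository):

```python
import numpy as np

def normalize_rgb(image_uint8):
    """Scale raw 8-bit RGB values to [-0.5, 0.5] in a single step.

    Hypothetical fix: normalization is applied exactly once, instead of
    dividing by 255 both at load time and again at processing time.
    """
    return image_uint8.astype(np.float32) / 255.0 - 0.5

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(normalize_rgb(img))  # values span [-0.5, 0.5]
```

Applying this once (and removing the second division) would restore the full dynamic range of the image features.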
The first scaling happens during loading (FloorNet/RecordWriterTango.py, line 99 at e7bd879), and the second during processing (line 334 of the same file). This results in RGB values collapsing into the range (-0.5, -0.49607843).
It can be reproduced with the following snippet (similar for training records)
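A minimal sketch of the effect, assuming the first pass divides by 255 and the second applies /255 - 0.5 (which is consistent with the reported range; the exact code in RecordWriterTango.py may differ):

```python
import numpy as np

# Raw 8-bit pixel values covering the full [0, 255] range.
image = np.arange(256, dtype=np.float32).reshape(16, 16)

loaded = image / 255.0            # first scaling, at load time -> [0, 1]
processed = loaded / 255.0 - 0.5  # second scaling, at processing time

# The whole image collapses into a tiny band just below -0.496.
print(processed.min(), processed.max())  # -> -0.5 -0.49607843...
```

Because all values end up within roughly 0.004 of each other, the image branch effectively receives a near-constant input.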
Could such an issue affect how the values are interpreted by the network? Is this related to the already-pretrained weights?
Thanks in advance