
Usage of image branch and possible scaling bug #15

Open
ggalan87 opened this issue Nov 16, 2018 · 1 comment
ggalan87 commented Nov 16, 2018

I am interested in investigating results with and without the image branch of the network. As far as I can tell, you do not provide a checkpoint that uses this branch, so I trained the network with the branch enabled on my own.

First of all, I want to confirm that corner_acc.npy and topdown_acc.npy, which you provide with your example, are the weights corresponding to the pretrained DRN and HG networks, and that they are meant to be used as-is for both training and validation/testing.
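For reference, this is how I peek at those files to check what they contain (my own minimal check, not code from the repository; it only assumes the files were saved with np.save and sit in the working directory):

```python
import numpy as np

# Print the type and shape of each provided file.
for name in ['corner_acc.npy', 'topdown_acc.npy']:
    data = np.load(name, allow_pickle=True)
    print(name, type(data), getattr(data, 'shape', None))
```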

Next, I want to point out a possible bug related to the image features: within RecordWriterTango.py, the RGB values are scaled twice.

One time during loading:

```python
color = color.astype(np.float32) / 255
```

and a second time during processing:

```python
points[:, 3:] = points[:, 3:] / 255 - 0.5
```

This results in RGB values within the range (-0.5, -0.49607843).
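To see where that range comes from, a quick arithmetic check (not code from the repository):

```python
# A raw channel value of 0 ends up at 0/255/255 - 0.5 = -0.5,
# and a value of 255 ends up at 255/255/255 - 0.5 ≈ -0.49607843.
print(0.0 / 255 / 255 - 0.5)    # -0.5
print(255.0 / 255 / 255 - 0.5)  # -0.49607843137254902
```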

It can be reproduced with the following snippet (similarly for the training records):

```python
import numpy as np
import tensorflow as tf
from RecordReader import getDatasetVal  # record reader from this repository (module path may differ)

record_filepath = 'Tango_val.tfrecords'
dataset = getDatasetVal([record_filepath], '', True, 1)
iterator = dataset.make_one_shot_iterator()
input_dict, gt_dict = iterator.get_next()

# Take the point cloud of the first sample and inspect its RGB channels.
pt = input_dict['points'][0]
points = tf.Session().run([pt])[0]
RGB = points[:, 3:6]
print(np.amin(RGB), np.amax(RGB))  # with the current records this prints values in (-0.5, -0.49607843)
```
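If I am reading this right, one of the two divisions could simply be dropped, e.g. keeping the scaling in the image loading and changing the processing line in RecordWriterTango.py as below (only a sketch on my side, not tested against the pretrained weights):

```python
# The colors were already divided by 255 when the image was loaded,
# so only the centering should remain here.
points[:, 3:] = points[:, 3:] - 0.5
```

Since the scaling happens at record-writing time, the .tfrecords would need to be regenerated for the snippet above to print a range close to (-0.5, 0.5).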

Could such an issue affect how the values are interpreted by the network? Is this related to the already pretrained weights?

Thanks in advance

art-programmer (Owner) commented
You are right on both points. It seems that the scaling indeed happens twice, which is a bug. I am not sure how much fixing the scaling issue will benefit the network; I need to test that later. The pretrained weights correspond to the data provided in the .tfrecords.
