Testing based on random weights also gives the same results as in the paper #86
Comments
Is the data you use KITTI? If it is KITTI, I guess the results with and without training are almost the same because the initial values of cov_lat and cov_up in the code are very good and tailor-made for KITTI. After I used my own dataset, when training reached epoch 3000 the resulting trajectory was visibly different from the untrained one.
@rendoudoudou Yes, I am using the KITTI dataset. So, if I need to test the model with more convolution layers added, how do I do it? And apart from cov_lat and cov_up, what other initial values play a major role in the convergence of the model?
Sorry, I don't know CNNs very well, so I don't know how to add more convolutional layers. I guess it would be modified in the class MesNet in utils_torch_filter.py, but I have not tried it. I am using the convolutional layers provided by the author of the code.
I think the initial parameters in the class KITTIParameters in main_kitti.py are very important. These parameters provided by the author are tailored to the KITTI dataset. The most important of them are cov_lat and cov_up, because these two parameters are related to the covariance matrix Nn trained by the author.
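To make the retuning point concrete, here is a minimal sketch of overriding dataset-specific initial parameters in a subclass, mirroring how a KITTIParameters-style class could be adapted for a custom dataset. The class, attribute values, and units below are illustrative assumptions, not the repo's actual numbers:

```python
# Hypothetical sketch: dataset-tuned initial parameters held as class
# attributes, overridden in a subclass for a different sensor setup.
# The names cov_lat and cov_up come from the thread; the values are
# placeholders, NOT the values used in main_kitti.py.
class Parameters:
    cov_lat = 0.2    # lateral zero-velocity pseudo-measurement covariance (assumed)
    cov_up = 300.0   # vertical zero-velocity pseudo-measurement covariance (assumed)

class MyDatasetParameters(Parameters):
    # Retuned for a hypothetical custom dataset: only the overridden
    # attributes change, everything else is inherited.
    cov_lat = 1.0
    cov_up = 10.0

params = MyDatasetParameters()
print(params.cov_lat, params.cov_up)  # → 1.0 10.0
```

The subclassing pattern keeps the KITTI defaults intact while making it obvious which parameters were changed for the new dataset.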
@rendoudoudou Thanks for sharing your experience. One more question: even if I change the initial values and run the code, at some point during tuning it will again converge to the same values as in the paper. Is there any other way I can test the code on the same dataset?
I did not modify the author's code. I just converted my own data to the pickle file format by referring to the author's read_data function. This conversion does not change the IMU values.
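A minimal sketch of that kind of conversion, packing raw IMU samples into a pickle file without altering their values. The dictionary keys and array shapes here are assumptions for illustration, not the schema the repo's read_data function actually produces:

```python
import pickle
import numpy as np

# Synthetic stand-in for a recorded IMU sequence (field names assumed):
# 101 samples of timestamps, gyroscope rates, and accelerometer readings.
t = np.linspace(0.0, 1.0, 101)                 # timestamps, s
gyro = np.zeros((101, 3))                      # angular rates, rad/s
acc = np.tile([0.0, 0.0, 9.81], (101, 1))      # specific force, m/s^2

# Serialize to a pickle file; the IMU values are stored byte-for-byte.
with open("my_sequence.p", "wb") as f:
    pickle.dump({"t": t, "gyro": gyro, "acc": acc}, f)

# Reload and confirm the round trip preserves the data unchanged.
with open("my_sequence.p", "rb") as f:
    data = pickle.load(f)
print(np.array_equal(data["acc"], acc))  # → True
```

The round-trip check at the end demonstrates the commenter's point: pickling is pure serialization and does not change the IMU values.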
@rendoudoudou Thank you, thank you!
The initial parameters are indeed different in these two places. In my understanding, the initial parameters in utils_numpy_filter.py have no effect and you can ignore them, because the initial parameters in main_kitti.py overwrite those in utils_numpy_filter.py.
@rendoudoudou Thank you so much for your insights, they will be very helpful. If I have further questions I will mail you. Could you please send a hi message to [email protected], so that I can approach you in the future if I have any doubts? Thank you.
Hello all and @mbrossar,
Initially I deleted the iekfnets.p file in temp and set the flags to train the model, i.e.
read_data = 0
train_filter = 1
test_filter = 0
results_filter = 0
and the code's save function saved the randomly initialized weights of the model. I did not train the model for even a single epoch before saving state_dict() (this saving is already implemented in the code).
Later, when I tested this random-weights model by setting test_filter = 1 and results_filter = 1, I obtained the same results as published in the paper. How is that possible?
Without training, how can anyone get the state-of-the-art results from the paper? In the paper the model was trained for up to 400 epochs (mentioned in the code). I then also trained the model for 400 epochs to cross-verify, and there are no changes in the results: with training and without training the results are the same.
I request anyone to explain this to me in detail, or have I misunderstood something in the code?