
EndoSurf's training #10

Open
smoreira00 opened this issue Feb 14, 2024 · 3 comments
smoreira00 commented Feb 14, 2024

Before using my own datasets, I decided to train EndoSurf on one of the EndoNeRF datasets (pulling_soft_tissues). I then tried to train on my own private dataset with exactly the same configuration used for pulling_soft_tissues, but I was not able to obtain good results.

With that in mind (and after checking that the camera poses and intrinsics were correct and that the scene was inside the unit sphere), I compared the training curves of the two datasets (pulling_soft_tissues and mine). This is what I obtained:

  • pulling_soft_tissues:
    [screenshots: training loss curves, 2024-02-14]

  • My private dataset:
    [screenshots: training loss curves, 2024-02-14]

I noticed that, in my case, every loss curve seemed off:

  • the depth and SDF losses were always 0
  • the color loss and angle loss had very high values
  • the eikonal and surf neig losses look wrong
  • etc.

Could you suggest what might be happening? Where is the data normalized?
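For anyone debugging the same thing, a quick sanity check along these lines can confirm whether the camera centers actually sit inside the unit sphere. This is a hedged sketch, not EndoSurf's actual normalization code: it assumes poses are camera-to-world 4x4 matrices stored as an (N, 4, 4) array, and the `offset`/`scale` names are placeholders.

```python
import numpy as np

# Hypothetical check: given camera-to-world poses as an (N, 4, 4) array,
# verify the camera centers lie inside the unit sphere; if they do not,
# compute a translation and scale that re-normalizes them.
poses = np.tile(np.eye(4, dtype=np.float32), (3, 1, 1))
poses[:, 0, 3] = [0.5, -2.0, 1.5]  # example camera x-positions

centers = poses[:, :3, 3]  # camera centers in world coordinates
radius = np.linalg.norm(centers, axis=1).max()
print(f"max camera distance from origin: {radius:.3f}")

if radius > 1.0:
    # shift to the centroid and scale so all centers fit in the unit sphere
    offset = centers.mean(axis=0)
    scale = np.linalg.norm(centers - offset, axis=1).max()
    normalized = poses.copy()
    normalized[:, :3, 3] = (centers - offset) / (scale + 1e-8)
    print("poses re-normalized")
```

If the depth and SDF losses sit at exactly 0, it is also worth checking that the depth maps are non-empty and expressed in the same (normalized) scale as the poses.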

@yuqingping01

Hello, how did you preprocess your data so that it meets the requirements?

@yuqingping01

Excuse me, how can I reconstruct my own dataset by training the model? I have processed my binocular endoscope video, converted the generated disparity maps to TIFF format, and I also have JSON files with the camera parameters, but I found that trainer_endosurf.py reads .pkl files. Do I need to serialize the data into .pkl files? What should I do? Could you give me some guidance? Thank you.

@Ruyi-Zha (Owner)

Hi, thanks for your interest. A PKL file is just a dictionary of NumPy arrays, so you can modify the dataloader to load JSON files instead. One important point: we assume the poses are pre-normalized. It is therefore better to slightly modify preprocess.py to process your files and convert them to *.pkl format.
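As a hedged illustration of what "a dictionary of NumPy arrays serialized with pickle" could look like, the sketch below packs per-frame data into one dict and round-trips it through a .pkl file. The key names ("rgb", "depth", "pose", "K") are placeholders, not EndoSurf's actual schema; mirror the keys that preprocess.py really writes.

```python
import pickle
import numpy as np

# Hypothetical layout: a dict of NumPy arrays, one entry per data type.
n_frames, h, w = 2, 4, 5
data = {
    "rgb": np.zeros((n_frames, h, w, 3), dtype=np.float32),   # color images
    "depth": np.zeros((n_frames, h, w), dtype=np.float32),    # depth maps
    "pose": np.tile(np.eye(4, dtype=np.float32), (n_frames, 1, 1)),  # camera-to-world, pre-normalized
    "K": np.eye(3, dtype=np.float32),                         # camera intrinsics
}

# Serialize, as a modified preprocess.py might do:
with open("my_dataset.pkl", "wb") as f:
    pickle.dump(data, f)

# Read it back, as trainer_endosurf.py does:
with open("my_dataset.pkl", "rb") as f:
    loaded = pickle.load(f)
print(sorted(loaded.keys()))  # ['K', 'depth', 'pose', 'rgb']
```

Depth maps converted from disparity should be expressed in the same units and scale as the (normalized) poses before being stored.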
