
Add save option on analysis #114

Merged · 13 commits merged into main on Jul 11, 2023
Conversation

adamltyson
Member

@adamltyson adamltyson commented Jun 27, 2023

Closes #111
Also closes #116

@adamltyson adamltyson marked this pull request as draft June 27, 2023 15:28
@deprecated-napari-hub-preview-bot

deprecated-napari-hub-preview-bot bot commented Jun 27, 2023

Preview page for your plugin is ready here:
https://preview.napari-hub.org/brainglobe/brainreg-segment/114
Updated: 2023-07-11T09:30:24.997223

@adamltyson
Member Author

This needs tests, will probably build on top of #115 to simplify things

@adamltyson
Member Author

This PR has become slightly complicated, as I needed to fix some tests before I could write new ones. I haven't split it into two PRs @alessandrofelder, but let me know if you have any issues reviewing. What I've done is:

  • Added the "save" checkbox so that users can save their segmentation at the same time as running the analysis
  • Fixed a silly bug where some regression tests passed spuriously because the "correct" reference data was being copied into the test directory 😒
  • Fixed some other tests that started failing after the fix above (they had not been updated when the functionality changed)
  • Finally, added some simple tests for the new functionality, addressing Add save option on analysis #114

@adamltyson
Member Author

There are some remaining test failures with napari 0.4.18, so I will look into those.

@adamltyson adamltyson marked this pull request as ready for review July 10, 2023 17:24
@adamltyson
Member Author

@alessandrofelder this is ready for review. TBH I'm a bit bored of this PR. It was meant to take 10 minutes, but I started it two weeks ago.

I've had to add some sleep(3) calls into the tests. I don't like this, but I can't seem to find a way to make them pass without it.

Interested in your thoughts.

@alessandrofelder
Member

As pointed out, this is a hacky solution, and it carries the risk that on slower machines (I don't know how slow they would have to be) 3 seconds is not enough time to save the data, so the test will fail there.

In the interest of adding the additional saving button and not losing (more) momentum, I think we should still merge. Apart from the sleep hack, the changes look good to me.

I have two further related comments that I can turn into issues if you agree - let me know @adamltyson.

a possible long-term solution

I think a better long-term solution would be to:

  • mock the brainreg_segment.segment.run_save() function in these tests, and check that it's called with the expected parameters
  • write unit tests for run_save() and/or export_all()
    • tests for run_save could again mock export_all, to avoid those tests being non-deterministic due to multiprocessing
    • tests for export_all could mock the thread_worker decorator to render it trivial, so that export_all runs in the same thread (we trust that thread_worker is sufficiently tested in the third-party library it comes from, so we don't need to test it here)
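The first point could look something like this. A minimal sketch: the widget class, method names, and call signature are illustrative assumptions, with `run_save` standing in for `brainreg_segment.segment.run_save`:

```python
from unittest import mock


class SegmentWidget:
    """Toy stand-in for the real brainreg-segment napari widget."""

    def run_save(self, directory, save_csv=True):
        # The real method would dispatch saving to a worker thread.
        raise RuntimeError("real saving spawns a worker thread")

    def run_analysis(self, directory, save=False):
        # With the new checkbox ticked, analysis also triggers a save.
        if save:
            self.run_save(directory, save_csv=True)


widget = SegmentWidget()
with mock.patch.object(widget, "run_save") as mock_save:
    widget.run_analysis("/tmp/output", save=True)

# The GUI-level test only checks the call, never the (multiprocessing) save.
mock_save.assert_called_once_with("/tmp/output", save_csv=True)
```

This keeps the GUI test deterministic: it never waits for files to appear on disk.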

This would ensure cleaner separation between

  • the tests that check that clicking GUI buttons calls the expected functionality
  • the tests that call multiprocessing functionality
  • the tests that check the multiprocessing functionality itself
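The thread_worker pass-through mentioned above could be sketched like so. Names are assumptions: the real decorator is napari's `thread_worker`, and `export_all` here is a toy stand-in; in a real test the swap would happen via `mock.patch` before the module under test is imported:

```python
def trivial_thread_worker(func):
    """Pass-through replacement for napari's thread_worker decorator.

    No worker object, no extra thread: the decorated function runs
    synchronously in the caller's (i.e. the test's) thread.
    """
    return func


# In a real test suite this swap would be applied before the module
# defining export_all is imported, e.g. by patching
# "napari.qt.threading.thread_worker" with trivial_thread_worker.

@trivial_thread_worker
def export_all(destination, tracks):
    # Toy stand-in: record what would be written instead of writing it.
    return {"destination": destination, "n_tracks": len(tracks)}


result = export_all("/tmp/output", ["track_0", "track_1"])
```

Because the decorator is trivial, the test can inspect `result` directly rather than waiting for a worker to finish.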

naming of testing modules

As a side note, some of the naming in the tests is a bit confusing:

  • we have a tests/test_integration/test_tracks folder that contains test_tracks_analysis.py
  • we also have a tests/test_integration/test_gui/test_tracks.py

Can we improve the naming somehow?

@adamltyson
Member Author

Thanks @alessandrofelder. I've renamed the tests slightly, and raised #120

@adamltyson
Member Author

> As pointed out, this is a hacky solution, and carries the risk that on slower machines (don't know how slow they would have to be) 3 seconds is not enough to save the data and the test will fail there.

Yup, just had to increase this. I'm ok with the tests taking an extra 30s for now, but this is not sustainable.
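A common middle ground between a fixed sleep and the full mocking approach (an assumption on my part, not something this PR does) is to poll for the expected output with a bounded timeout, so fast machines return almost immediately and slow machines get the full allowance:

```python
import time
from pathlib import Path


def wait_for_path(path, timeout=30.0, interval=0.1):
    """Return True once `path` exists, or False after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if Path(path).exists():
            return True
        time.sleep(interval)
    return False
```

A test would then assert on `wait_for_path(...)` for the expected output file instead of sleeping for a fixed 3 (or 30) seconds.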

@adamltyson adamltyson merged commit 4b95042 into main Jul 11, 2023
@adamltyson adamltyson deleted the save branch July 11, 2023 09:44
Successfully merging this pull request may close these issues:

  • Possible bug in some tests
  • Add a "save" checkbox to autosave during analysis