Make itertomap loading more lazy #1016
base: main
Conversation
Some tests in torchvision fail. Apparently a stream isn't being closed by my implementation.
The stream in question is, I think, being created here:
This causes this test to fail:
As I don't know how to fix it, I would be happy if someone else could chime in.
@ejguan I guess you have the most context on the errors, given that you fixed pytorch/vision#6997. We just merged pytorch/vision#7403 to make diagnosing this easier; that patch will land with tomorrow's nightly. Could you also have a look here?
@pmeier Thanks, I think pytorch/vision#7403 does the right job. I will take a look at this PR to see why such an issue happens with it.
I don't know the exact reason why this PR fails TorchVision. I have a strong feeling that it's because the iterator object is never properly cleaned up.
I think so as well. @pmeier Thank you for linking pytorch/vision#6997; I think I know what's happening now! Probably not every value is retrieved from the map in the test, so the map is never fully loaded and the underlying stream is never closed. How would we address this if we want lazy loading?
I think the problem is that the iterator which opens the file handles never reaches its end, so those handles are never released.
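To make that failure mode concrete, here is a minimal plain-Python sketch (not the actual torchdata code) of why a partially consumed iterator that opens a stream never runs its cleanup:

```python
import os
import tempfile

def read_items(path):
    f = open(path)              # stream is opened lazily, on first next()
    try:
        for line in f:
            yield line.strip()
    finally:
        f.close()               # only runs on exhaustion, close(), or GC

# Create a small throwaway file so the example is self-contained.
fd, path = tempfile.mkstemp(text=True)
os.close(fd)
with open(path, "w") as fh:
    fh.write("a\nb\nc\n")

it = read_items(path)
print(next(it))                 # pull a single item; the iterator is not exhausted
# At this point the `finally` block has not run, so the file handle is still
# open.  Without an explicit it.close() (or full exhaustion), cleanup is left
# to garbage collection -- which is what the unclosed-stream check flags.
it.close()                      # closing the generator releases the handle
os.remove(path)
```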
The branch was force-pushed from 606ff72 to ce70b45.
@SvenDS9 Seems like the test now displays a proper error message: https://github.com/pytorch/data/actions/runs/4384715686/jobs/7676523545#step:9:2098. So the failure is still ongoing, but at least it is now clear what is happening.
Just as a heads-up: feel free to change anything under
I am pretty sure that is what happens. That also explains why removing the reference to the iterator when it's depleted doesn't help, as that code is never reached. Since the iterator hasn't finished yet, the stream needs to remain open so that more elements can be retrieved from the dp. So in a way this is expected behavior. I think we should make
To fix the tests in torchvision we could either:
Of the two options provided, I strongly favor 2., since 1. would only fix the test but not the behavior. I'll let @ejguan comment on whether the actual proposal is the way to go here. That being said, we are currently not actively working on the datasets, and thus I'm also OK with 1. to unblock. However, this means that if we pick this up again in the future, we will need a proper solution then.
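For context, a rough sketch of what "lazier" loading in an IterToMapConverter-style wrapper could look like; the class and attribute names here are illustrative assumptions, not the actual torchdata implementation. It also shows why a map that is never fully requested keeps its source iterator (and any streams that iterator holds) alive:

```python
class LazyIterToMap:
    """Illustrative map-style wrapper that only drains its source iterator
    as far as needed to answer each __getitem__ call."""

    def __init__(self, source_datapipe):
        self._source_it = iter(source_datapipe)   # may hold open file handles
        self._map = {}

    def __getitem__(self, key):
        # Pull (key, value) pairs until the requested key shows up or the
        # source is exhausted.  If callers never request every key, the
        # source iterator is never exhausted and its streams stay open.
        while key not in self._map and self._source_it is not None:
            try:
                k, v = next(self._source_it)
            except StopIteration:
                self._source_it = None   # fully loaded; source can be dropped
                break
            self._map[k] = v
        return self._map[key]

dp = LazyIterToMap([("a", 1), ("b", 2), ("c", 3)])
print(dp["a"])   # loads only the first pair; ("b", 2) and ("c", 3) stay unread
```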
So, I guess we need to figure out a way to let users indicate when they are done with the datapipe.
Here is my third proposal, which requires changes to PyTorch, TorchData and TorchVision: in PyTorch Core, add a base callback function for all DataPipes.
Any suggestion is welcome!
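A very rough sketch of the kind of "done" callback described above; `register_done_callback` and `notify_done` are hypothetical names, not an existing PyTorch or TorchData API:

```python
from typing import Callable, List

class DataPipeWithDoneHook:
    """Hypothetical base-class mixin: consumers register callbacks and signal
    completion so that implementations can close any open streams."""

    def __init__(self):
        self._done_callbacks: List[Callable[[], None]] = []

    def register_done_callback(self, cb: Callable[[], None]) -> None:
        self._done_callbacks.append(cb)

    def notify_done(self) -> None:
        # Called by the consumer (e.g. a dataset wrapper in torchvision) once
        # it no longer needs data; hooks would close leftover streams here.
        for cb in self._done_callbacks:
            cb()

pipe = DataPipeWithDoneHook()
pipe.register_done_callback(lambda: print("closing leftover streams"))
pipe.notify_done()
```

Under this kind of scheme, torchvision's wrappers would presumably call the hook once a dataset is fully consumed (or torn down), so that any streams held by partially drained iterators are closed deterministically.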
Fixes #454
Changes