Don't terminate process if file contents did not change #23
Comments
Well, with that optimization, we'd have bugs where large files that were appended to wouldn't trigger new builds. Do you know of another watcher system that does this? I've not been able to find one.
I don't think so... maybe the Bazel one, but I haven't tested it. https://github.com/bazelbuild/bazel-watcher
Ah, I was imagining the optimization would be "don't trigger a new build if the source exactly matches", but if a file is large enough that it can't be checksummed efficiently, you just trigger the build every time / don't apply the optimization.
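A minimal sketch of that check in Go, with an arbitrary size cap and an invented hashIfSmall helper (this is not code from justrun):

```go
// Sketch only: hash a file's contents unless it is too large to checksum
// cheaply. The 4MB cap and the function name are arbitrary choices here.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

const maxHashableSize = 4 << 20 // above this, skip the optimization and just rebuild

// hashIfSmall returns the hex SHA-256 of the file and true, or false when the
// file is too large (or unreadable) and the caller should trigger the build anyway.
func hashIfSmall(path string) (string, bool) {
	fi, err := os.Stat(path)
	if err != nil || fi.Size() > maxHashableSize {
		return "", false
	}
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", false
	}
	return hex.EncodeToString(h.Sum(nil)), true
}

func main() {
	sum, ok := hashIfSmall("main.go")
	fmt.Println(sum, ok)
}
```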
Well, it's got the backing of all of Bazel, but yeah, that's definitely better. I'd be willing to look at a patch for this, but don't have time to implement it.
If the changes that triggered the event were only to known files, and all of those files have the same hashes as when we began, we shouldn't actually reload the command. This optimization does *not* short-circuit the command reload if any of these conditions hold:
- a changed file is larger than 20MB, i.e. changes to these files always trigger a reload
- more than 500 files changed
- a directory or an unknown file was changed
Fixes jmhodges#23.
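Roughly, the skip conditions above amount to a predicate like the following sketch; the names (shouldReload, changedFile) and the exact shape are invented for illustration and are not the actual patch:

```go
// Sketch of the skip conditions described above; not the actual patch.
package main

import "fmt"

const (
	maxHashableSize = 20 << 20 // files over 20MB always trigger a reload
	maxChangedFiles = 500      // too many changes: don't bother comparing
)

type changedFile struct {
	path  string
	size  int64
	isDir bool
}

// shouldReload reports whether the command needs to be restarted, given the
// files that changed and the digests recorded for known files beforehand.
// knownDigests and currentDigests map path -> content hash.
func shouldReload(changed []changedFile, knownDigests, currentDigests map[string]string) bool {
	if len(changed) > maxChangedFiles {
		return true
	}
	for _, c := range changed {
		if c.isDir || c.size > maxHashableSize {
			return true
		}
		old, known := knownDigests[c.path]
		if !known {
			return true // unknown file: play it safe
		}
		if currentDigests[c.path] != old {
			return true // contents really did change
		}
	}
	return false // every changed path hashed the same as before: skip the reload
}

func main() {
	fmt.Println(shouldReload(nil, nil, nil)) // no changes: prints false
}
```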
Here's a rough patch, which I believe behaves the way I expect: kevinburke@639e375. It's pretty slow, though: I watch maybe 50 files on a Mac (Go files, CSS, JS, templates), and getting the digest for all of them takes 20ms. We could potentially parallelize it, but that would add more complexity. If you're actually interested in this after looking at what's involved, I could clean it up and try to work on the performance.
I'd be up for it, but yeah, that's way too slow. A friend of mine got nerd-sniped and started poking at a concurrent version, but you'll likely have to carry it the rest of the way. Here's a gist of that work: https://gist.github.com/blinsay/41a1a775f83d9ed2b6ed241dcaf6b21e
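For reference, concurrent digesting could look something like this sketch (it is not the gist's code): each watched file is hashed in its own goroutine and the results are collected under a mutex.

```go
// Sketch of hashing the watched files concurrently; names are illustrative.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"sync"
)

func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

// digestAll hashes every path concurrently and returns path -> hex digest.
// Files that fail to hash are left out, which forces a reload for them.
func digestAll(paths []string) map[string]string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		digests = make(map[string]string, len(paths))
	)
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			sum, err := hashFile(p)
			if err != nil {
				return
			}
			mu.Lock()
			digests[p] = sum
			mu.Unlock()
		}(p)
	}
	wg.Wait()
	return digests
}

func main() {
	fmt.Println(digestAll([]string{"main.go"}))
}
```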
Thanks for the gist! It might take me a while to update the patch, but this is something I am still definitely interested in.
If I open a file and save it without making any modifications, it would be neat if justrun did not terminate the process. This would help prevent unnecessary server reloads, especially since I compulsively hit :w when I am working with a file in vim.

The implementation might be a little tricky. On Macs, here are the events that get generated when I save a file in TextEdit:
So it wouldn't be enough to e.g. check only for single events, since you'd have to wait for the second or third event to determine "the file is still there and didn't change". You could check right after you get the tick.C event if you have the delay turned on.
It would also be slow if a file was particularly large, though you could decide to bail on the "is this actually the same file" strategy after like 4MB or whatever.
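Putting those two ideas together, a sketch of the "check right after the delay timer fires" flow might look like this; the events channel, the contentsUnchanged helper, and the delay value are all hypothetical, not justrun's actual event loop:

```go
// Sketch only: coalesce a burst of watcher events, and only decide whether to
// restart once the delay timer fires.
package main

import (
	"fmt"
	"time"
)

const delay = 750 * time.Millisecond // stand-in for justrun's delay setting

// contentsUnchanged is a stand-in for "re-hash the changed files and compare
// against the digests we recorded last time" (bailing out above some size cap).
func contentsUnchanged(changed map[string]bool) bool { return false }

func watchLoop(events <-chan string, restart func()) {
	changed := make(map[string]bool)
	timer := time.NewTimer(delay)
	timer.Stop()
	for {
		select {
		case name := <-events:
			changed[name] = true
			timer.Reset(delay) // coalesce bursts of events
		case <-timer.C:
			// The burst has settled; only now check whether anything
			// actually changed on disk.
			if contentsUnchanged(changed) {
				fmt.Println("contents identical, skipping restart")
			} else {
				restart()
			}
			changed = make(map[string]bool)
		}
	}
}

func main() {
	events := make(chan string)
	go watchLoop(events, func() { fmt.Println("restarting command") })
	events <- "main.go"
	time.Sleep(time.Second)
}
```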
See also fsnotify/fsnotify#232, but I'm not sure that's the right place for it.