Performance degradation on huge repositories #27
Hi @bozaro. For the same repository:
Also, while running with 0.2.5, writing of the objects seems to slow down quite a bit after a few of them are written. While the process was outputting things like:
I took a few file counts on the target directory:
Regarding memory consumption, at about the same time,
And during this stage only one CPU core was being used, at 100%, by the conversion, essentially in user time. During the whole process I noticed no significant CPU wait time; it was mostly spent in user (and a bit in sys). The filesystem was Btrfs on an SSD.
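The file counts described above track loose objects accumulating in the target directory. A minimal sketch of how such a count can be taken, assuming the standard Git `objects` fan-out layout (256 two-hex-digit subdirectories, with `pack` and `info` holding packfiles rather than loose objects); the directory names and counts below are synthetic, for illustration only:

```python
import os
import tempfile

def count_loose_objects(objects_dir):
    """Count loose object files under a Git-style objects directory.

    Loose objects live in two-hex-digit fan-out subdirectories;
    'pack' and 'info' are skipped because they hold packfiles and
    metadata, not loose objects.
    """
    total = 0
    for entry in os.listdir(objects_dir):
        sub = os.path.join(objects_dir, entry)
        if entry in ("pack", "info") or not os.path.isdir(sub):
            continue
        total += len(os.listdir(sub))
    return total

# Synthetic layout for illustration (a real repo would use .git/objects).
root = tempfile.mkdtemp()
for fan in ("ab", "cd"):
    os.makedirs(os.path.join(root, fan))
    for name in ("x" * 38, "y" * 38):
        open(os.path.join(root, fan, name), "w").close()
os.makedirs(os.path.join(root, "pack"))
print(count_loose_objects(root))  # 4
```

On a real repository, `git count-objects -v` reports the same information (the `count` field) without custom code.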
How do I pack files for every 10 000 objects? I tried to convert a big repo with lfs-test-server, but after the conversion (about 4 days) lfs-test-server only had file-name metadata; no files appeared inside its folder (I tried the same setup with a small repo and it succeeded). It's too slow to debug.
Between 0.2.4 and 0.2.5 the commit most likely to impact performance was 974270d, which was aimed at reducing memory consumption. It looks like it dropped a DAG library in favour of a local implementation of commit-graph tracking.
While trying to convert a 3.6 GB repo to LFS, I noticed a dramatic slowdown at around 1289037/1371200 objects. It might have been slowing down before that… but I see the following:
and
and
So from around 600–700 objects per second down to about 5. After leaving this running for a while, it seems to slow even further, to about 1 per second:
Running the visualVM sampler over it, I see the following percentages:
So 96.8% of the time is spent parsing the revision, and it becomes really slow at a certain point. I hope this information helps.
Are there any ideas on how to fix the problem?
I am having this problem in 2022 with the latest version. It is using ~1 of 32 cores, ~1 GB of 64 GB available RAM, and <10% disk I/O, and the import has been running for 13 days so far. :D Is there a workaround for forcing it to be more parallel and/or use more RAM?
This repository hasn't changed since 2016; it's a miracle it still works! It has been 5 years, so I'm no longer certain exactly what I did at the time, but I have a vague recollection that the commit reverted cleanly!
@leth Thanks for attempting to help. This seems to be part of the current official release (I have git-lfs/3.0.2 (GitHub; windows amd64; go 1.17.2)); what would be the way to get this change reverted in a new release?
Sorry, I have a contributor badge here only because one of my PRs was merged once; I have no permissions on this project! It sounds like you'd need to find the project you downloaded git/github/git-lfs from and let them know they're bundling an unmaintained tool with known bugs 🤷🏻‍♂️
There is performance degradation on huge repositories (>250 000 objects).
It looks like the root cause is the file count growing too large. It would be much better to generate pack files for every 10 000 objects.