WebDAV directory listing is too slow #561
I tried the same operation with FUSE s3fs mounted as a local folder; the result was even worse, about 57 minutes to get the listing. I found the project sftpgo, which supports S3 and WebDAV (written in Go), and for this folder the result was 1 second:
I believe you can compare their WebDAV implementation with yours and improve your version. PS: the environment is the same.
Hi @archekb Thanks for your report. That said, the DAV endpoint may need some performance optimisation, as we applied to the main API used by the frontend (heavy use of caching). In the meantime we would recommend using the cells-client if you want to interact programmatically with the server (it uses the same APIs as the frontend). -c
Hi @cdujeu, thanks for your reply.
In all cases I used curl to get the listing; the conditions were identical: an S3 store, listing a folder with 7500 files, and we are talking about a single operation, i.e. one request.
I believe you can optimise the current WebDAV mechanism, or make 38 requests to the function which prepares the listing for the HTML frontend (I believe it does the same thing). And this is a real problem: curl can wait 25 minutes for a listing, but other WebDAV clients can't; they usually time out after 600 seconds, maybe earlier.
No, my bad, I did not describe it clearly, but it's still S3-compatible iDrive E2 storage. sftpgo prepares the listing for a folder with 7500 files from S3 storage in less than 1 second. Even with a local folder, Cells takes 2 minutes 40 seconds for the same operation, which is 160x slower. But it was actually S3 storage, and Cells was about 1500x slower.
I compared two applications which give me access to my S3 storage via WebDAV. I believe it's a reasonable comparison.
Thanks, but my task is to back up my phone. Customers have asked many times for this simple feature, which would automatically sync files from the phone to the cloud on a schedule: here, here, here and here. But who cares? That's why I have to use third-party software and WebDAV to sync my files with the cloud.
Correct NginX proxy config for WebDAV:
This is mandatory if you don't want random file rejects and errors in the NginX logs:
@cdujeu you can add it to the NginX proxy doc section, because I spent a few days on this.
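The actual config snippet did not survive in this thread. As a hedged sketch only, these are the kinds of directives typically needed when proxying WebDAV through NginX (the upstream address, location path, and timeout values are illustrative assumptions, not the poster's real settings):

```nginx
# Illustrative sketch, not the original poster's config.
location /dav/ {
    proxy_pass          http://127.0.0.1:8080;  # assumed Cells upstream
    proxy_http_version  1.1;
    proxy_set_header    Host $host;
    proxy_set_header    X-Real-IP $remote_addr;
    proxy_set_header    X-Forwarded-Proto $scheme;

    # Large uploads: do not cap the body size or buffer it to disk first.
    client_max_body_size     0;
    proxy_request_buffering  off;
    proxy_buffering          off;

    # Slow PROPFIND responses: raise timeouts well above the 60s default.
    proxy_read_timeout  3600s;
    proxy_send_timeout  3600s;
}
```

Without the buffering and timeout overrides, NginX's defaults (60s read timeout, 1MB body limit) can produce exactly the random rejects described above.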
hi @archekb we'll dig deeper into the performance - it's probable that the current implementation (based on the golang standard dav library, which wraps our internal datasources as a virtual filesystem) makes unnecessary operations (like OpenFile when all that's needed is stats).
📝 Describe the bug
WebDAV directory file listing preparation is too slow.
When I try to get the file listing with the PROPFIND method via URL/dav/e2-store/backup, the listing for 7500 files takes more than 20 minutes to prepare (the result file is 4MB), while in the web interface I can see all files (39 pages x 200 items per page) without any delay. The first 2MB are ready after 5 minutes, the third one after 5 more minutes, and the last one needs about 10 minutes.
Test via internet with NginX proxy:
Test on cells localhost:
I added storage from the local FS and created 7500 files with a script:
and tried to get the listing.
It is much faster than the S3-compatible storage, but 3 minutes is still too slow, I believe, when in the web interface I can see it without any delay (one page of 200 items takes 400ms over the internet; the full dataset would be about 16 seconds across 38 requests, and if it were a single request, I believe it would be even faster).
⚙️ How-to Reproduce
Steps to reproduce the behavior:
🩺 Environment / Setup
Complete the following information:
Server Versions:
Client used for testing:
Additional context: