Without virtualisation #266
Conversation
Passes functionality checks.
Don't think the throttling is working as desired... but with this the 429 errors return...
Ah. I was wondering if this was just a fluke or because of list member limits.
What are the response times from the endpoint looking like? We may need to do some query optimizations.
I'll defer to your expertise, or if you want to discuss in the Discord, that may help.
I'm circling back to this, and I'm debating whether raising the limit is the correct solution. I'm OK with the list counts populating after the core list information, though I don't know if that's good UI/UX design. Functionally, we may have no choice, since the return of the 100 lists is instant.
@thieflord06, one possible solution would be for the count to be resolved server side. Assuming minimal overhead, could the count for each list be added as a new column in the db (alongside createdDate, name, url, etc.)? It could make up part of the AccountListEntry that's currently being returned and consequently fix this issue. Is this feasible?
We weren't doing that for lists, but we were for blocks, and we had to separate that out because it was slowing things down significantly.
Gotcha, I'll give it some thought & see if I can land on another solution.
After taking a closer look at this I think I finally understand what is going on. First of all: client side throttling can definitely work here. Second: the lodash throttle utility is not a solution to this particular kind of rate limiting where different invocations are being called with different arguments. What we need in this scenario is a request queue where each one waits in order for its turn. I'll swap out the lodash util for an appropriate one and we can take it from there. |
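To illustrate the distinction being drawn here, a minimal sketch of the request-queue idea (hypothetical helper names, not the actual PR code): lodash's throttle collapses rapid calls into one, which loses invocations made with different arguments, whereas a queue runs every task, in order, with spacing between them.

```javascript
// Minimal sequential request queue (hypothetical sketch, not the PR's code).
// Unlike lodash throttle -- which drops intermediate calls -- every enqueued
// task eventually runs, in FIFO order, spaced apart by `intervalMs`.
function createRequestQueue(intervalMs) {
  let chain = Promise.resolve();
  return function enqueue(task) {
    // Each task starts only after the previous one (plus its spacing) settles.
    const result = chain.then(() => task());
    chain = result
      .catch(() => {}) // a failed task must not stall the queue
      .then(() => new Promise((resolve) => setTimeout(resolve, intervalMs)));
    return result; // resolves with this task's own value
  };
}
```

Each call returns a promise for its own task's result, so callers with different arguments all get their answers, just serialized.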
I believe this is now all working as intended. Review for functionality once more?
Looking good! Needs number formatting.
Oh, one last question here for @thieflord06: what is the actual underlying rate limit enforced by the backend for the list totals (or against the entire API)? The once-per-500ms set on these lists is a bit slower than I would hope for.
5/sec for all anon endpoints. |
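Given a 5/sec window rather than a fixed per-request delay, a limiter can admit short bursts as long as the window cap holds. A hedged sketch of that idea (hypothetical helper, not the actual implementation; the PR itself uses a p-queue-style queue):

```javascript
// Hypothetical sliding-window limiter: allow at most `cap` calls per
// `intervalMs`. With cap = 5, intervalMs = 1000 this matches a 5/sec anon
// limit, vs. the stricter fixed once-per-500ms (2/sec) spacing discussed above.
function createRateLimiter(cap, intervalMs) {
  const stamps = []; // timestamps of calls still inside the window
  return function tryAcquire(now = Date.now()) {
    // Evict timestamps that have aged out of the window.
    while (stamps.length && now - stamps[0] >= intervalMs) stamps.shift();
    if (stamps.length < cap) {
      stamps.push(now);
      return true; // call may proceed
    }
    return false; // over the cap: caller should wait and retry
  };
}
```

A queue would poll `tryAcquire` (or compute the next free slot) before dispatching each request.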
Once everything loads, it looks like if you leave the tab and go back to it, it tries to re-fetch everything. Recording.2025-01-24.215251.mp4 |
I may be able to fix the re-fetching, but will either need to wait for sindresorhus/p-queue#220 to merge or I may end up publishing the fork myself |
Do you think they'll review it in a timely manner? |
It's fairly likely, but I'll probably just publish my fork tomorrow afternoon if there's no activity by then.
Alright, I've pulled in my forked package and tuned things up a bit. There was still a bit of a weird issue where the first couple of lists in a page can have their initial requests cancelled and then they get re-queued at the end of the list, so items 4-100 have to load their totals before the totals load in for lists 1-3. I solved that by using our existing "poor man's virtualization" technique that's already in use on block/blockedby tabs and reusing it on lists as well. Now the initial render should only include 20 items. |
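The "poor man's virtualization" described above can be sketched as a simple growing window: render only the first 20 items, then widen the window as the user nears the bottom. A minimal sketch, with hypothetical names (the real code lives in the block/blockedby tab components):

```javascript
// Sketch of "poor man's virtualization": cap the initial render at PAGE_SIZE
// items so only their totals are queued first, and grow the window on scroll.
const PAGE_SIZE = 20; // initial render includes only 20 items

function visibleSlice(items, shownCount) {
  return items.slice(0, shownCount);
}

function nextShownCount(shownCount, total, nearBottom) {
  // Only grow the window when the user approaches the end of what's rendered.
  if (!nearBottom) return shownCount;
  return Math.min(shownCount + PAGE_SIZE, total);
}
```

Because only the rendered items enqueue their total-count requests, the first page's requests are no longer cancelled and re-queued behind items 4-100.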
We'll take it. Looks good. |
Deferred work on the virtualisation element to get a viable working product out.
Throttled getListSize to prevent server overload.
Updated with css changes from #264.
List count appears to the right of the list name.
resolves #255.