
Without virtualisation #266

Merged: 14 commits merged into ClearskyApp06:list-counts on Jan 26, 2025

Conversation

@tb38r (Contributor) commented Jan 14, 2025

Held off on the virtualisation element in order to get a viable working product out.
Throttled getListSize to prevent server overload.
Updated with CSS changes from #264.
List count appears to the right of the list name.
Resolves #255.

Screenshot 2025-01-14 at 17 38 38


@thieflord06 (Member)

Passes functionality checks.

@tb38r (Contributor, Author) commented Jan 14, 2025

Don't think the throttling is working as desired...
The count shown is actually the count of the first list item (the endpoint returns 100 items at a time).
That's why a whole batch of items shows the same count until you scroll and a new batch loads (all of them reusing the first item's count).
Moving the throttling directly into the queryFn prop returns seemingly accurate data:

Screenshot 2025-01-14 at 19 04 58

but with this, the 429 errors return...
Needs further investigation.
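
This symptom is consistent with how lodash's throttle behaves: calls made inside the wait window all receive the result of the last real invocation, regardless of their own arguments. A minimal sketch of the suspected failure mode, with getListSize and the 500 ms wait assumed from the discussion rather than taken from the PR:

```ts
// Sketch of the suspected failure mode (getListSize and the 500 ms wait are
// assumptions based on the discussion, not the PR's exact code).
// lodash's throttle returns the result of the most recent real invocation to
// every caller inside the wait window, so when each list row calls the
// throttled function with its own URL, most rows simply receive the first
// row's promise, and therefore the first row's count.
import throttle from 'lodash/throttle';

declare function getListSize(listUrl: string): Promise<number>;

const throttledGetListSize = throttle(getListSize, 500);

// Called once per AccountListEntry during render: only the first call in each
// 500 ms window actually hits the API; the others reuse its (stale) result.
function loadCount(listUrl: string) {
  return throttledGetListSize(listUrl);
}
```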

@thieflord06 (Member)

Ah. I was wondering if this was just a fluke or because of list member limits.

@thieflord06 (Member)

What are the response times from the endpoint looking like? We may need to do some query optimizations.

@tb38r (Contributor, Author) commented Jan 16, 2025

Response time via Postman:

Screenshot 2025-01-16 at 16 37 56

with about a 100 ms variance;

~181 ms via the network tab in dev tools.

Thinking about the problem, client-side throttling wouldn't work: we're iterating through the AccountListEntry array and calling the hook for each entry independently, at a rate of knots. Perhaps some back-end rate limiting, or something else entirely.

@thieflord06 (Member)

I'll defer to your expertise; or, if you want to discuss it in the Discord, that may help.

thieflord06 linked an issue on Jan 23, 2025 that may be closed by this pull request.
@thieflord06 (Member) commented Jan 23, 2025

I'm circling back to this. I'm debating whether raising the limit is the correct solution. I'm OK with the list counts populating after the core list information, though I don't know if that's good UI/UX design.

Functionally, we may have no choice, since the 100 lists are returned instantly.

@tb38r

@tb38r (Contributor, Author) commented Jan 24, 2025

@thieflord06, one possible solution would be for the count to be resolved server-side. Assuming minimal overhead, could the count for each list be added as a new column in the db (alongside createdDate, name, url, etc.)?

It could then form part of the AccountListEntry that's currently being returned and consequently fix this issue. Is this feasible?

@thieflord06 (Member)

We weren't doing that for lists, but we were for blocks, and we had to separate that out because it was slowing things down significantly.

@tb38r (Contributor, Author) commented Jan 24, 2025

Gotcha, I'll give it some thought & see if I can land on another solution.

@noahm (Collaborator) commented Jan 25, 2025

After taking a closer look at this I think I finally understand what is going on. First of all: client side throttling can definitely work here. Second: the lodash throttle utility is not a solution to this particular kind of rate limiting where different invocations are being called with different arguments. What we need in this scenario is a request queue where each one waits in order for its turn. I'll swap out the lodash util for an appropriate one and we can take it from there.
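
A minimal sketch of the request-queue idea described above, using p-queue (the package referenced later in this thread); the function names and the React Query usage are assumptions rather than the PR's exact code:

```ts
// Minimal sketch of the request-queue idea (names and options are assumptions,
// not the PR's exact code). Each call is enqueued with its own arguments and
// runs in turn, so every list keeps its own result instead of sharing the
// first invocation's, and requests reach the server at a controlled pace.
import PQueue from 'p-queue';

declare function getListSize(listUrl: string): Promise<number>;

// One in-flight request at a time; the pace can be tuned separately.
const listSizeQueue = new PQueue({ concurrency: 1 });

export function fetchListSizeQueued(listUrl: string) {
  return listSizeQueue.add(() => getListSize(listUrl));
}

// e.g. as a React Query queryFn:
// useQuery({ queryKey: ['listSize', listUrl], queryFn: () => fetchListSizeQueued(listUrl) });
```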

@noahm (Collaborator) left a comment


I believe this is now all working as intended. Review for functionality once more?

@thieflord06 (Member)

Looking good! Needs number formatting.
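
The formatting change itself isn't shown in the thread; one plausible approach, assumed here for illustration, is locale-aware grouping via Intl.NumberFormat:

```ts
// One plausible way to format the counts (an assumption, not the PR's code):
// locale-aware digit grouping, e.g. 12345 -> "12,345" in en-US.
const listCountFormatter = new Intl.NumberFormat();

export function formatListCount(count: number): string {
  return listCountFormatter.format(count);
}
```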

@noahm (Collaborator) commented Jan 25, 2025

Oh, one last question here for @thieflord06: what is the actual underlying rate limit enforced by the backend for the list totals (or against the entire API)? The once-per-500 ms pace set on these lists is a bit slower than I would hope for.

@thieflord06 (Member)

5/sec for all anon endpoints.
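
Given that limit, the queue could be paced against 5 requests per second instead of a fixed once-per-500 ms interval. A sketch with p-queue's interval options, using illustrative values rather than the settings that actually shipped:

```ts
// Sketch: pace the queue against the stated 5/sec anonymous limit rather than
// a fixed once-per-500 ms interval (illustrative values only).
import PQueue from 'p-queue';

const listSizeQueue = new PQueue({
  interval: 1000,  // rolling one-second window
  intervalCap: 5,  // start at most 5 requests per window
  concurrency: 5,  // allow them to overlap within the window
});
```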

@thieflord06 (Member) commented Jan 25, 2025

Looks like, for my lists, they all loaded except for the last one.

It also looks like it's trying to retry but not showing.

It finally loaded, but it took a few times?

[Screenshot]

@thieflord06 (Member) commented Jan 25, 2025

Once everything loads, it looks like if you leave the tab and go back to it, it tries to re-fetch everything.

Recording.2025-01-24.215251.mp4
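
The behaviour described here resembles React Query's default refetch-on-window-focus; whether that is the actual cause isn't confirmed in the thread. Assuming these are React Query queries, a per-query opt-out might look like this (names are illustrative):

```ts
// Assumption: if these are React Query queries, the tab-return refetch matches
// its default refetchOnWindowFocus behaviour and could be switched off per query.
// Names here are illustrative, not the project's actual hooks.
import { useQuery } from '@tanstack/react-query';

declare function fetchListSizeQueued(listUrl: string): Promise<number>;

export function useListSize(listUrl: string) {
  return useQuery({
    queryKey: ['listSize', listUrl],
    queryFn: () => fetchListSizeQueued(listUrl),
    refetchOnWindowFocus: false, // don't re-fetch every count on tab focus
    staleTime: Infinity,         // treat a loaded count as fresh for the session
  });
}
```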

@noahm (Collaborator) commented Jan 26, 2025

I may be able to fix the re-fetching, but will either need to wait for sindresorhus/p-queue#220 to merge or I may end up publishing the fork myself

@thieflord06 (Member)

Do you think they'll review it in a timely manner?

@noahm (Collaborator) commented Jan 26, 2025

It's fairly likely, but I'll probably just publish my fork tomorrow afternoon if there's no activity by then.

@noahm (Collaborator) commented Jan 26, 2025

Alright, I've pulled in my forked package and tuned things up a bit. There was still a bit of a weird issue where the first couple of lists in a page can have their initial requests cancelled and then they get re-queued at the end of the list, so items 4-100 have to load their totals before the totals load in for lists 1-3. I solved that by using our existing "poor man's virtualization" technique that's already in use on block/blockedby tabs and reusing it on lists as well. Now the initial render should only include 20 items.
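
A sketch of the "poor man's virtualization" technique described above: render only the first 20 rows and grow the window as the user scrolls. Component and prop names are assumptions, and the IntersectionObserver trigger is one possible implementation rather than the project's actual one:

```tsx
// Sketch of the "poor man's virtualization" idea: render the first 20 rows and
// reveal more when a sentinel near the bottom scrolls into view. Component and
// prop names are assumptions, not the project's actual code.
import React, { useEffect, useRef, useState } from 'react';

const PAGE_SIZE = 20;

export function ListRows({ lists }: { lists: { url: string; name: string }[] }) {
  const [visibleCount, setVisibleCount] = useState(PAGE_SIZE);
  const sentinelRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const sentinel = sentinelRef.current;
    if (!sentinel) return;
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        setVisibleCount((n) => Math.min(n + PAGE_SIZE, lists.length));
      }
    });
    observer.observe(sentinel);
    return () => observer.disconnect();
  }, [lists.length]);

  return (
    <>
      {/* Only the visible slice mounts, so only these rows queue count requests. */}
      {lists.slice(0, visibleCount).map((list) => (
        <div key={list.url}>{list.name}</div>
      ))}
      {visibleCount < lists.length && <div ref={sentinelRef} />}
    </>
  );
}
```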

@thieflord06 (Member)

We'll take it. Looks good.

thieflord06 merged commit 1deef25 into ClearskyApp06:list-counts on Jan 26, 2025
2 checks passed
Successfully merging this pull request may close these issues: Display list count

3 participants