useOffsetInfiniteScrollQuery does not account for added/removed rows #428
Comments
Hey @roobox, I am running into the same issue. Were you able to find a workaround for this?
Hey! Thanks for opening the issue. I understand what's going wrong, but I don't have a good fix for this. For now, I would propose to just revalidate the query. I was thinking about refactoring the pagination and infinite scroll hooks, and this is good input. Let me know if you have more!
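With plain SWR, revalidating could look something like this (a sketch only; the exact cache key shape the helpers use is internal, so matching on the table name is an assumption):

import { useSWRConfig } from 'swr'

function useRevalidateBooks() {
  const { mutate } = useSWRConfig()
  // Throw away every cached page for the books query and refetch.
  // Matching keys on the table name is a heuristic, not the library's
  // documented key format.
  return () =>
    mutate(
      (key) => typeof key === 'string' && key.includes('books'),
      undefined,
      { revalidate: true }
    )
}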
Hi @pikameow420, I ended up developing my own infinite scroll implementation to be able to handle "more complex" cases.
Can you share it? :)
Sure! I am using SWR from Vercel. Roughly, it looks like this (trimmed down a bit):

import useSWR from 'swr'

// Define how many items to load on each subsequent fetch
const PAGE_SIZE = 10

/**
 * Load a page of books, optionally filtered by discountId.
 */
async function fetchBooks(discountId, from, to) {
  let query = supabase
    .from('books')
    .select('id, name, created_at')
    .order('created_at', { ascending: false })
    .order('id', { ascending: false })
    .range(from, to)

  // Optionally filter by discountId
  if (discountId !== null) {
    query = query.eq('discount_id', discountId)
  }

  const { data, error } = await query
  if (error) {
    throw error
  }
  console.log('Fetched Books:', data)
  return data
}

function useBooks(categoryId, discountId) {
  // Build a cache key that includes discountId if it exists
  const key = discountId !== null ? `books/${categoryId}/${discountId}` : `books/${categoryId}`

  // 1) Load the initial set of books
  const { data, error, mutate } = useSWR(categoryId ? key : null, () =>
    fetchBooks(discountId, 0, PAGE_SIZE - 1)
  )

  if (!categoryId) {
    return { data: [], error: null, loadMore: () => {}, refresh: () => {} }
  }

  return {
    data,
    error,
    // 2) Load more books when scrolling
    loadMore: async () => { /* fetch the next range and merge it into the cache via mutate */ },
    // Optional helper to force-refresh the entire list
    refresh: () => mutate(),
  }
}
Describe the bug
I am using useOffsetInfiniteScrollQuery to achieve an infinite-scroll effect in my web application. The data is sorted by the created_at column (descending order), and loading more data (pageSize = 10) works perfectly until someone inserts or deletes a row. When I insert a row into my Supabase DB using a stored procedure and then add it manually to my SWR frontend cache via useUpsertItem, the item gets added at index 0 of my data array, as it has the newest created_at value. It is also at position 0 in the DB table when sorted by the created_at column. Because the "offset" value of the SQL query stayed the same while all rows in the sorted DB shifted one position down, the first row of the next page is the same as the last row of the previous page (so I fetch the same row twice).
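To make the shift concrete, here is a minimal simulation of the failure mode (plain JavaScript, no Supabase involved; the row names are made up):

// Rows sorted newest-first; pageSize = 2 to keep it short
const pageSize = 2
let rows = ['C', 'B', 'A']

// Page 1: OFFSET 0, LIMIT 2
const page1 = rows.slice(0, pageSize) // ['C', 'B']

// A newer row is inserted: every existing row shifts down one position
rows = ['D', ...rows] // ['D', 'C', 'B', 'A']

// Page 2: OFFSET 2, LIMIT 2 -- index 2 is now 'B' again
const page2 = rows.slice(pageSize, pageSize * 2) // ['B', 'A']
// 'B' is fetched twice; after a delete, a row would be skipped instead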
To Reproduce
Setup Infinite Scroll Query:
const { data, loadMore, isValidating, error } = useOffsetInfiniteScrollQuery(
  supabase
    .from('books')
    .select('id, name, created_at')
    .order('created_at', { ascending: false }) // multiple rows can have the same created_at value, e.g. when using batch inserts
    .order('id', { ascending: false }),
  { pageSize: 10 },
);
Add a row to the DB without this package by using a stored procedure
Insert the added row into the SWR cache
const upsertItem = useUpsertItem({
  primaryKeys: ['id'],
  table: 'books',
  schema: 'public',
});
upsertItem({ id: '231c220d-f4d7-4ca9-8761-d1beee69cbb7', name: 'testbook', created_at: new Date().toISOString() });
Load more data
loadMore() // This is where the described bug occurs: the last row of the first page and the first row of the second page are identical, because the offset value has not changed, but the DB table has.
Expected behavior
I would expect to be able to "manually" adjust the offset somewhere. In my case, after adding an element to the SWR cache, I want to increase the offset by one, so that instead of performing:
SELECT "id", "name", "created_at"
FROM "books"
ORDER BY ...
OFFSET 10
LIMIT 10
I want to perform:
SELECT "id", "name", "created_at"
FROM "books"
ORDER BY ...
OFFSET 11
LIMIT 10
And when I delete an item via useDeleteItem, I want to be able to decrease the offset by 1.
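As a stopgap until something like this exists, deduplicating the merged pages by primary key on the client at least hides the duplicate row (a sketch; data is the flattened array returned by the hook):

const seen = new Set()
const books = (data ?? []).filter((row) => {
  if (seen.has(row.id)) return false
  seen.add(row.id)
  return true
})
// This only masks the symptom: after a delete, the shifted offset still
// silently skips one row, which no client-side filter can recover.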
Additional context
I can't use useCursorInfiniteScrollQuery, as the created_at column does not hold unique values (e.g. with batch imports); the package would need to be extended to support additional filters and to use >= created_at instead of > created_at (see https://the-guild.dev/blog/graphql-cursor-pagination-with-postgresql#cursor-pagination-with-additional-filters). Maybe this is also worth considering.
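For illustration, such a composite cursor over (created_at, id) can already be expressed with the supabase-js query builder through an or filter (a sketch; lastCreatedAt and lastId would come from the last row of the previous page):

// Keyset page: rows strictly after the cursor in (created_at DESC, id DESC)
// order, i.e. (created_at, id) < (lastCreatedAt, lastId). The id tie-breaker
// keeps the cursor unique even when created_at values collide.
const { data, error } = await supabase
  .from('books')
  .select('id, name, created_at')
  .or(`created_at.lt."${lastCreatedAt}",and(created_at.eq."${lastCreatedAt}",id.lt."${lastId}")`)
  .order('created_at', { ascending: false })
  .order('id', { ascending: false })
  .limit(10)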