
Locking and Trimming #332

Closed
JulianMar opened this issue Mar 13, 2024 · 2 comments
@JulianMar

Hello,

As I understand it, trimming is triggered by a lottery, and by default there is a 1 in 1,000 chance of a request triggering the trim.
This can be quite detrimental to the performance of the app. In our case we have about 3M entries in the Pulse table, so deleting data based on a timestamp takes some time. In the meantime, new entries need to be inserted into the table, but it is still locked by the delete query.
Under heavy traffic this can escalate quickly and cause DB clusters to crash.

Is there a way to disable the auto trimming and execute it at another time instead?
I would be open to creating an MR for this, but I first wanted to get an idea of whether this feature would be accepted.

@timacdonald
Member

timacdonald commented Mar 15, 2024

Hi @JulianMar, if performance is a concern in your application, or you have a large amount of data in Pulse, we recommend using the Redis ingest instead of writing directly to the database.

This means that the database trimming will happen in the pulse:work command and not during a request. It also means that inserts and deletes against the Pulse database never happen at the same time.

Pulse will still trim the Redis stream within a request when the lottery triggers, but as long as your pulse:work command is running, the Redis stream will be essentially empty, so the trim will have no noticeable impact on your application's performance.

See: https://laravel.com/docs/11.x/pulse#performance
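Roughly, the switch is just configuration (a sketch based on the default config/pulse.php described in the linked docs; the exact keys and env variable names may differ between Pulse versions, so check your published config):

```php
// config/pulse.php (sketch — your published config may differ slightly)
'ingest' => [
    // Switch from the default "storage" ingest to Redis so requests only
    // append to a Redis stream instead of writing to the database.
    'driver' => env('PULSE_INGEST_DRIVER', 'storage'),

    'redis' => [
        // Which Redis connection from config/database.php to use.
        'connection' => env('PULSE_REDIS_CONNECTION'),
    ],
],
```

With `PULSE_INGEST_DRIVER=redis` set in your `.env`, you also need to keep `php artisan pulse:work` running (e.g. under Supervisor) so the stream is continuously drained into the database.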

If you would like to keep writing directly to the database, you could set the lottery configuration to [0, 1]. This will stop the trimming within the request. You could then call Pulse::trim() in a scheduled command or elsewhere, but this doesn't necessarily fix the problem, as it sounds like your tables are locking up while the trim is running and requests are writing to the DB.
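For example (a sketch only — adjust to wherever the trim lottery lives in your published config/pulse.php, and this assumes Laravel 11's routes/console.php scheduling):

```php
// config/pulse.php — a [0, 1] lottery never wins, so no trimming during requests
'trim' => [
    'lottery' => [0, 1],
    'keep' => '7 days',
],
```

```php
// routes/console.php — run the trim yourself at a quiet time instead
use Illuminate\Support\Facades\Schedule;
use Laravel\Pulse\Facades\Pulse;

Schedule::call(fn () => Pulse::trim())->dailyAt('03:00');
```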

I would recommend migrating to a Redis ingest, which should solve things and make your application performance even better. Hope that helps.

@JulianMar
Author

Thanks for the amazing comment! I will check it out :)
