High memory usage #81
Comments
This is a cache-related issue: nothing reaches the disk surface until the handle is closed or a transaction is committed. You can purge the cache by manually committing the transaction via unqlite_commit() each time you reach a certain threshold (e.g. every 10K insertions).
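A minimal sketch of that suggestion, assuming the standard UnQLite KV API (the database path, key/value layout and the 1M/10K counts below are illustrative, not taken from the report):

```c
#include <stdio.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    char key[32], val[128];
    long i;
    int rc;

    /* Hypothetical database path; any writable location works. */
    rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
    if (rc != UNQLITE_OK)
        return 1;

    for (i = 0; i < 1000000; i++) {
        snprintf(key, sizeof(key), "key-%ld", i);
        snprintf(val, sizeof(val), "value-%ld", i);

        rc = unqlite_kv_store(pDb, key, -1, val, sizeof(val));
        if (rc != UNQLITE_OK)
            break;

        /* Commit every 10K insertions so dirty pages are written to disk
           and the in-memory page cache can be purged, instead of growing
           until unqlite_close(). */
        if ((i + 1) % 10000 == 0) {
            rc = unqlite_commit(pDb);
            if (rc != UNQLITE_OK)
                break;
        }
    }

    unqlite_close(pDb);
    return rc == UNQLITE_OK ? 0 : 1;
}
```

The threshold is a memory/durability trade-off: a smaller commit interval bounds the page cache more tightly, at the cost of more frequent disk syncs.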
That doesn't seem to work: even if I run the benchmark so that it periodically calls unqlite_commit(), memory usage keeps growing.
I wonder if insertions made after calling unqlite_commit() will allocate new memory. If they are kept in the previously allocated memory, then it's not much of a deal; we just have to call unqlite_commit() periodically.
Yes, you have to understand that UnQLite keeps some of the freed memory in an internal pool before releasing it to the OS. We do this so that successive read/write operations do not request new memory blocks from the underlying OS again, which is very costly.
There is an error in the reference count that frees the page object in the page_unref function.
Hi there!
I'm running a performance comparison of several databases using the ioarena benchmarking tool. I'm running the tool with the following parameters:
What I found surprising is that memory usage grows linearly, up to 83.6 MB, which seems quite a lot compared with the other DB engines run in the same benchmark (upscaledb: 4.3 MB, sqlite: 4.0 MB, rocksdb: 27.8 MB). I wrote the unqlite driver for ioarena myself, so it's possible that the problem is in the driver; however, running valgrind with the leak-check tool doesn't report any leaks (it appears that all the RAM is properly freed when the DB is closed).
Given that the FAQ states that the DB should also be usable on embedded devices, I wonder whether such high memory usage could be due to some bug.
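For reference, a leak check of this kind is typically invoked along the following lines (the ioarena driver/benchmark flags and operation count are illustrative assumptions, not the exact parameters from the original run; `--leak-check=full` is the standard valgrind option for a full leak report):

```
valgrind --leak-check=full ./ioarena -D unqlite -B set -n 1000000
```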