[CRITICAL] [acdcli.cache.db] - Error opening database. #124
Killing all the acd_cli processes does, however, fix this, but that prevents uploads while the FUSE filesystem is mounted, which is the only way to upload whilst the rsync/cp unreliability is present.
My 'stab in the dark' investigations tell me this happens when doing large uploads. I assume the FUSE mount tries to do syncs every now and then, fails to access the locked database, and gets into a bad state. This leaves the database locked (so upload and sync can't access the db) and also leaves the FUSE mount broken. Could we perhaps use different cache directories? This seems an ideal solution to me (as the upload cache wouldn't need to be identical to the FUSE one) - eventual consistency and all that!
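For background on the mechanism being described here: SQLite allows only one writer at a time, so if the FUSE background sync and an upload/sync process hit the same nodes.db concurrently, the second writer can fail with "database is locked". A minimal sketch of that contention (illustrative only, not acd_cli's actual code; the file name is made up):

```python
import sqlite3

# Two handles on the same database file, in autocommit mode so transactions
# are controlled explicitly. The path is hypothetical, for the demo only.
fuse_conn = sqlite3.connect('/tmp/locking_demo.db', isolation_level=None)
upload_conn = sqlite3.connect('/tmp/locking_demo.db', isolation_level=None, timeout=0)

fuse_conn.execute('CREATE TABLE IF NOT EXISTS nodes (id INTEGER PRIMARY KEY)')
fuse_conn.execute('BEGIN IMMEDIATE')      # the "FUSE sync" takes and holds the write lock

try:
    # The "upload" process tries to write while the lock is held and gets
    # SQLITE_BUSY, which Python surfaces as OperationalError.
    upload_conn.execute('INSERT INTO nodes DEFAULT VALUES')
except sqlite3.OperationalError as exc:
    print(exc)                            # -> database is locked

fuse_conn.execute('COMMIT')               # once the lock is released, writes work again
upload_conn.execute('INSERT INTO nodes DEFAULT VALUES')
```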
"Error opening database" - you can have two separate node caches by copying your .cache directory and using […]. The FUSE background sync can be disabled by adding […].
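A rough sketch of that separate-cache idea (the paths and the ACD_CLI_CACHE_PATH override are my assumptions, not something spelled out in this thread - adjust to whatever the docs actually specify):

```python
import os
import shutil
import subprocess

# Hypothetical paths; the environment variable name is an assumption.
src = os.path.expanduser('~/.cache/acd_cli')
dst = os.path.expanduser('~/.cache/acd_cli-upload')

if not os.path.exists(dst):
    shutil.copytree(src, dst)            # give the uploader its own nodes.db

env = dict(os.environ, ACD_CLI_CACHE_PATH=dst)
subprocess.run(['acd_cli', 'sync'], env=env, check=True)
subprocess.run(['acd_cli', 'upload', '/local/dir', '/remote/dir'], env=env, check=True)
```

This keeps the uploader and the FUSE mount from contending for the same database file; the two caches then only have to converge on the next sync.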
I don't think it's corruption of nodes.db; if I kill all the acd_cli processes it all starts working again on the next sync. Also, doing an ls in the mounted directory when it's all broken gives 'Bad address', if that's any help. But doing the above and remounting gets it working again.
That's strange. An ls should call ACDFuse's readdir, which does not raise an EFAULT (bad address), but an ENOENT (no such file or directory). Can you tell me what operating system/distribution you're using and the Python 3 version? It would also be helpful if you could write up some steps on how to reproduce this reliably.
Unfortunately, I don't have a reliable repro (it just happens sporadically - got it again today, but this time I was not doing a transfer at the same time), so it looks to just be FUSE related... Ubuntu x64 14.04 LTS, Python 3.4.0
I'm seeing this more and more when doing multiple things with the FUSE mount at a time. 'Bad address' comes back when trying to ls the acd mount. This is a regular occurrence when a number of apps start scanning the directory... Any ideas, or anything I could/should debug?
Mounting read-only might help.
It would be helpful if you could inspect the debug log, i.e. run […].
Update: Sorry, the […]
Haven't seen it since (didn't change any perms, just left it in logging mode to catch it when it happens). I'll update when I have more news! I did see it on another Linux install in Azure, but didn't have debug on at that point.
It's been a while, but I have still seen this (even with only one instance running). I didn't manage to get logs as I accidentally tripped over the power cable and lost them...
Here is the stack trace. The log looks good around the same time:
acd_cli.log at the same time:
Nothing at 8:24 [edit]
Ran the fuse mount at debug and also didn't see anything:
Is there any way the PID for the acdcli upload task can be killed automatically when the upload has been completed, or has been stopped?
I have this issue happening from time to time. This always solves the problem, but I do not know yet how to catch those specific exceptions in bash to do this automatically:
[…]
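If it helps, here is a rough watchdog sketch of that kill-and-remount idea (the mount point, polling interval, and recovery commands are assumptions about a typical setup, not the commands elided above):

```python
import os
import subprocess
import time

MOUNT = os.path.expanduser('~/acd')     # hypothetical mount point

while True:
    try:
        os.listdir(MOUNT)                # raises OSError when the mount is broken, e.g. "Bad address"
    except OSError:
        subprocess.run(['fusermount', '-u', '-z', MOUNT])   # lazy unmount the stuck FUSE mount
        subprocess.run(['acd_cli', 'mount', MOUNT])          # remount
    time.sleep(60)
```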
I am also experiencing this issue, but I have not had a chance to gather logs. I will do so on the next occurrence. However, since I am not an SQLite expert... is there anything I can do to help either prevent or even mitigate this issue while still being able to use both the FUSE mount in r/w mode and the directory uploads? Until the acd_cli/encfs combination speed issues are fixed and the FUSE side becomes more stable, I still need to use both. Also, if you give me a list of things to look for when this happens, I will provide them.
See #124 (comment). I'd also welcome it if anyone could see if the following patch changes anything.

diff --git a/acdcli/cache/db.py b/acdcli/cache/db.py
index 9399711..54f31f4 100644
--- a/acdcli/cache/db.py
+++ b/acdcli/cache/db.py
@@ -59,6 +59,8 @@ class NodeCache(QueryMixin, SyncMixin):
             lambda conn: conn.connection.create_function('REGEXP', 2, _regex_match))
 
         initialized = os.path.exists(db_path)
+        self.engine.execute('PRAGMA busy_timeout = 30000;')
+
         if initialized:
             try:
                 initialized = self.engine.has_table(schema.Metadate.__tablename__) and \
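For reference, SQLite's busy_timeout pragma makes a connection retry a locked database for up to the given number of milliseconds before giving up, rather than failing straight away with "database is locked" - which is why the patch above could mask the lock contention. A minimal illustration (plain sqlite3, not acd_cli code; the path is made up):

```python
import sqlite3

conn = sqlite3.connect('/tmp/locking_demo.db')   # hypothetical path
# Keep retrying a locked database for up to 30 s instead of returning
# SQLITE_BUSY ("database is locked") immediately.
conn.execute('PRAGMA busy_timeout = 30000;')
print(conn.execute('PRAGMA busy_timeout;').fetchone())   # -> (30000,)
```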
Not going to happen.
I tried using this command […], but it appears to still use the default nodes.db cache.
I got the same errors again. I tried to use the 'fuser' command from #143 (comment) to see what was using it, but when I checked there was nothing.
I just took the ; out of the command and it is now using the correct cache path. I checked using the command […] before running it this time. It used the default when the ; was in the command. (Presumably the trailing ; turned the cache-path assignment into a separate shell command, so the setting never reached acd_cli; without it, the override applies to the acd_cli invocation itself.)
Chiming in: I'm also experiencing many 'database locked' errors when mounting multiple mounts, or mounting and sending at the same time.
Same issue here. It is sporadic though. Once this occurs, I need to unmount and remount to get it to work again.
Experiencing the same issue too.
@marcelowa Could you provide a log?
I actually changed my configuration to support a different cache path as suggested; for now it seems to do the trick.
It actually happened anyway, with a different cache path for uploading. Suspicious parts: […]
This is what I see if I am trying to access my mounted folder: […]
Attached the log.
Hi,
I don't do any FUSE mounting, I just run two instances at a time via […]. Do you think using an alternative database back-end like in #312 would eliminate the issue?
Hi,
I see this error when doing 'acd_cli sync', 'upload', etc.
This seems to happen when doing an acd_cli upload while the FUSE mount is open. Waiting a while doesn't fix it... Any ideas?
Steve