Error while accessing Cloudbeaver #3021
Describe the bug

The server pod is running without any issues, and there are no errors in the logs to suggest that anything failed. However, the login functionality is not working even when using the credentials from the config: the pod/container starts and asks for a login, but then the login fails, specifically indicating that the user has no password and that the credentials are invalid.

Here are the credentials:

This is also the response I get from the GraphQL (gql) console, which I understand is expected since I can't log in in the first place:

I'm attaching my settings and logs as files as well, so you can check the bootstrapping and other logs.

Comments
Hi @andres-chavez-bi, you can use environment variables to predefine the admin credentials. Using them will also skip the initial server configuration.
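For a Kubernetes/OpenShift deployment, that could look roughly like the sketch below. This is only a sketch, not the official chart: the variable names CLOUDBEAVER_ADMIN_NAME and CLOUDBEAVER_ADMIN_PASSWORD are an assumption based on the placeholders used in the stock initial-data.conf, so verify them against the file shipped with your CloudBeaver image.

```yaml
# Sketch of a Deployment container spec (not the project's official chart).
# CLOUDBEAVER_ADMIN_NAME / CLOUDBEAVER_ADMIN_PASSWORD are assumptions --
# check the initial-data.conf inside your CloudBeaver image for exact names.
containers:
  - name: cloudbeaver
    image: dbeaver/cloudbeaver:24.2.2
    env:
      - name: CLOUDBEAVER_ADMIN_NAME
        value: cbadmin
      - name: CLOUDBEAVER_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:              # keep the password out of the manifest
            name: cloudbeaver-admin  # hypothetical Secret name
            key: password
```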
Hi @EvgeniaBzzz, my goal is to have a CloudBeaver server running in an OpenShift environment. I am currently testing with a predefined admin user; at some point I'd like to test SAML authentication for the users (and the admins as well), but for now a predefined admin is fine for me.
The admin should be defined in a separate config, not in cloudbeaver.conf itself. Please note that the initial data will only be applied during the server's initial startup.
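As an illustration, a minimal sketch of the two files is shown below, assuming the stock layout of the Docker image. The key names (initialDataConfiguration, adminName, adminPassword) and default paths should be verified against the sample configs shipped with your CloudBeaver version.

```
// conf/cloudbeaver.conf (excerpt) -- sketch only, verify against your version
{
    server: {
        database: {
            driver: "h2_embedded_v2",
            url: "jdbc:h2:${workspace}/.data/cb.h2v2.dat",
            // separate file that holds the admin credentials
            initialDataConfiguration: "conf/initial-data.conf"
        }
    }
}

// conf/initial-data.conf -- applied only on the very first startup
{
    adminName: "${CLOUDBEAVER_ADMIN_NAME:cbadmin}",
    adminPassword: "${CLOUDBEAVER_ADMIN_PASSWORD:cbadmin20}"
}
```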
Hi @EvgeniaBzzz, I finally got my config correctly initialized. I used the cloudbeaver.conf file, pointing to the self-created init file for the admin credentials, as you suggested. Here's my config:

But now I'm facing a DB initialization error:

Here's my init DB config (from the file I created):

Thank you so much for your help!
@andres-chavez-bi Which CloudBeaver version are you using?
Hi @EvgeniaBzzz, I'm using 24.2.1.

I think the problem is my PV reclaim policy; I need to check on that, because from what I see my Helm chart correctly removes the PVC on uninstall, but when redeploying the data still seems to be there (as shown in the error). Would it make sense to skip initialization for objects that are already created (or overwrite them if the config has changed)? That might help in scenarios where this happens, not only in my case but in all K8s and OpenShift deployments. Since the CloudBeaver DB is in bootstrap mode you still need to initialize it even if the objects are already there, but it's just a suggestion. I will look into it and come back with feedback. Thanks again.
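For reference, the reclaim behavior is controlled by the persistentVolumeReclaimPolicy field on the PV. A generic Kubernetes sketch, not specific to any chart (the name and hostPath backend are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cloudbeaver-workspace-pv   # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # "Retain" keeps the data after the PVC is deleted;
  # "Delete" removes the underlying volume together with it.
  persistentVolumeReclaimPolicy: Delete
  hostPath:                        # placeholder backend for the sketch
    path: /data/cloudbeaver
```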
Hi @EvgeniaBzzz, so I've checked, and our PVs and PVCs have a reclaim policy set to "Delete". I also ran a test: I opened the container and found that the trace file was there, deleted it, and ran the run_server.sh script again; it failed with the same error (duplicate columns). I've also tried to delete the DB data file so it could be recreated, but no luck there either. I'm uploading the trace file for you to check the error, and also attaching evidence that the PV is no longer available after the deployment is deleted (uninstalled using Helm). Lastly, I've also updated to CloudBeaver 24.2.2 and the same happens. Is there anything else we can try? Thanks!
@andres-chavez-bi Could you please try to start the server in a new workspace? You could also try to delete the whole .data folder.
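One way to test the fresh-workspace suggestion in Kubernetes/OpenShift is to temporarily mount an emptyDir as the workspace, which rules out any stale PV state. A sketch only; the mountPath is an assumption based on the stock Docker image layout:

```yaml
# Test-only sketch: the workspace is wiped on every pod restart.
volumes:
  - name: cloudbeaver-workspace
    emptyDir: {}
containers:
  - name: cloudbeaver
    image: dbeaver/cloudbeaver:24.2.2
    volumeMounts:
      - name: cloudbeaver-workspace
        mountPath: /opt/cloudbeaver/workspace   # assumption: default image path
```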
Hi @EvgeniaBzzz, since these are tests, I delete the OpenShift (K8s) components every time before I deploy the application, so the data folder and all components of the application are deleted on every run. I have also deleted the .data folder entirely, as mentioned in my previous comment, and ran the run_server.sh script, and the issue is still present. I have also tried to deploy the server in a different namespace to avoid duplicates of any kind, and the issue persists; here's the snippet of the logs from the new namespace, and as you can see it also complains about duplicate columns. Again, DB corruption seems unlikely, since I delete all my components for every test to avoid any duplication. I've also checked that my env variables are not interfering with any configuration:

Do you have any other suggestions? Do I need to provide more information to help find the issue? Thanks!
Hello @EvgeniaBzzz, I have tried to use SQLite as the database, and the error is exactly the same (I'm using CloudBeaver 24.2.2). I think the issue might be somewhere else. Can you help me out? Is there something I can check to narrow down what it might be? I'm attaching the full logs again for you to review.
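For context, switching the internal database is done through the database section of cloudbeaver.conf. The sketch below is purely hypothetical: the driver id and JDBC URL are placeholders, so check CloudBeaver's documentation for the actually supported values.

```
// Hypothetical sketch only -- driver id and URL are placeholders.
{
    server: {
        database: {
            driver: "sqlite_jdbc",   // assumption: exact driver id may differ
            url: "jdbc:sqlite:${workspace}/.data/cb.sqlite.db"
        }
    }
}
```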
We need more time to investigate your case |