
fix: allow self-signed qBittorrent server certificates. #1753

Open · wants to merge 2 commits into base: dev

Conversation


@voc0der commented Dec 22, 2024


Homarr

Thank you for your contribution. Please ensure that your pull request meets the following requirements:

  • Builds without warnings or errors (pnpm build, autofix with pnpm format:fix)
  • Pull request targets dev branch
  • Commits follow the conventional commits guideline
  • No shorthand variable names are used (e.g. x, y, i or any abbreviation)

Summary

This update improves the getClient method in the QBitTorrentIntegration class to support qBittorrent servers with self-signed SSL certificates. The implementation adds an https.Agent configuration to bypass certificate validation when connecting via HTTPS.

Changes

  • Added a custom https.Agent with rejectUnauthorized: false to allow self-signed certificates.
  • Passed the https.Agent to the requestConfig parameter of the QBittorrent client.
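The change described above can be sketched roughly as follows. This is a minimal illustration, not the actual Homarr code: buildRequestConfig is a hypothetical helper name, and it assumes the @ctrl/qbittorrent client accepts an agent via its requestConfig.

```typescript
import https from "node:https";

// Hypothetical sketch: attach an https.Agent with rejectUnauthorized: false
// only when the target URL uses HTTPS, leaving plain-HTTP setups unchanged.
function buildRequestConfig(baseUrl: string): { agent?: https.Agent } {
  if (!baseUrl.startsWith("https://")) {
    return {}; // plain HTTP: no agent needed, behavior unchanged
  }
  return {
    // Disables certificate chain validation, so self-signed certificates
    // are accepted -- at the cost of man-in-the-middle protection.
    agent: new https.Agent({ rejectUnauthorized: false }),
  };
}
```

The resulting object would then be passed to the QBittorrent client constructor as its requestConfig.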

Benefits

  • Ensures compatibility with qBittorrent servers using self-signed SSL certificates and full chain signed certificates.
  • Maintains functionality for servers running over plain HTTP.

Example Usage

The client now automatically works with:

  • HTTP Servers: Connections proceed without changes.
  • HTTPS Servers (Self-Signed): The custom https.Agent handles certificate validation bypass.

This is my first PR for this project so I haven't actually built or tested this.

I can't get the beta to work with OIDC, so this is just speculative.

@voc0der requested a review from a team as a code owner December 22, 2024 16:57

deepsource-io bot commented Dec 22, 2024

Here's the code health analysis summary for commits d34d665..18f22ea. View details on DeepSource ↗.

Analysis Summary

Analyzer     Status       Link
JavaScript   ✅ Success   View Check ↗

💡 If you’re a repository administrator, you can configure the quality gates from the settings.

@voc0der changed the title from "Allow self-signed qBittorrent server certificates." to "fix: allow self-signed qBittorrent server certificates." Dec 22, 2024
@Meierschlumpf
Member

Not sure if we should allow rejectUnauthorized, as it increases the risk of man-in-the-middle attacks. What's your opinion, @manuel-rw? An alternative would be to add the certificate somehow so it can check that they are the same. But maybe I'm also too cautious.

@Meierschlumpf added the bug (Something isn't working) label Dec 22, 2024
@voc0der
Author

voc0der commented Dec 22, 2024

An environment variable could also govern this. People could accept the risk optionally.

A full trust chain would be nice too, but it's a bit more work, and most people are going to have self-signed certificates, if any at all, IMO. Especially since this uses the internal routes, most certificates will be self-signed, because using a real CA internally is its own security implication.

@manuel-rw
Member

  1. There is already an environment variable to configure this. It is undocumented: NODE_TLS_REJECT_UNAUTHORIZED. It completely disables certificate verification and should not be used.
  2. Self-signed certificates aren't problematic on their own, but I highly recommend signing one using https://letsencrypt.org/ or something similar. You'll get tons of problems down the chain in other tools too, not just Homarr.
  3. I currently have the same problem while writing feat: add proxmox integration #1752. We should implement a global trusted certificate that you can upload on your own, which will be passed to the agent: https://stackoverflow.com/a/78248049 This will resolve the issues with self-signed certificates while not impacting security.
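The "global trusted certificate" idea from point 3 could look roughly like this. This is a sketch, not Homarr's actual API: buildTrustedAgent and the CA path are illustrative names.

```typescript
import fs from "node:fs";
import https from "node:https";

// Hypothetical sketch: a user-provided CA bundle is passed to the agent's
// `ca` option, so certificates signed by that CA verify normally while
// rejectUnauthorized stays at its secure default (true).
function buildTrustedAgent(caPath: string): https.Agent {
  const ca = fs.readFileSync(caPath, "utf8"); // PEM-encoded CA certificate(s)
  return new https.Agent({ ca });
}
```

Unlike rejectUnauthorized: false, this keeps full certificate validation, only extending the set of trusted issuers.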

Member

@manuel-rw left a comment


As described in my comment, I do not agree with this change, as it impacts security significantly and should not be used in production.

@voc0der
Author

voc0der commented Dec 22, 2024

For what it's worth, I've been using NODE_TLS_REJECT_UNAUTHORIZED: 0 and it doesn't connect in the original Homarr. https://github.com/ajnart/homarr/blob/ed3d143b8a24e542e533b5e94a36f2f28f3c5a89/src/server/api/routers/download.ts#L133 uses a similar if not the same lib.

I'm talking about securing traffic from socket to socket internally. When you're hosting things, you shouldn't use your real domain cert everywhere either; that could expose your key, which is another security implication. I'm operating on the impression that this is optional, and in general, something is better than nothing. When people come to Homarr in my instance, they go to a reverse proxy with LE encryption. How Homarr talks to qBittorrent doesn't NEED to be through the reverse proxy. It could be, but then you poke ACL holes in Authelia/IdP. So why not encrypt the communication over the Docker subnet they share with a self-signed certificate, since you're probably also securing that route to the reverse proxy anyway?

Same with any app. But internally, I am also implementing more of a zero-trust model. It would seem nice if Homarr could talk to qBittorrent via that, especially considering all the arrs and everything else do.

@manuel-rw
Member

manuel-rw commented Dec 22, 2024

For what it's worth, I've been using NODE_TLS_REJECT_UNAUTHORIZED: 0 and it doesnt connect in the original homarr. ajnart/homarr@ed3d143/src/server/api/routers/download.ts#L133

Yes, because the library @ctrl/qbittorrent does not use the standard fetch API, and ofetch probably does not respect this variable. Instead, we could implement our own system where you pass the public key of certificates so the API client can verify the encryption.

I'm talking about securing from socket <-> socket internally

You're talking about... what? qBittorrent exposes a socket that a client can connect to. In this case, Homarr is the client. Certificates also apply to this connection, though, just as they do for normal HTTP.

When you're hosting things, you shouldnt use your real domain cert everywhere either. That could expose your key and is also another security implication.

Could you elaborate on this? I do not agree. You never need the private key to verify SSL encryption. Maybe https://www.cloudflare.com/learning/ssl/what-is-ssl/ and https://www.cloudflare.com/learning/ssl/what-is-asymmetric-encryption/ would be interesting reads for you.

I'm operating on the impression that this is optional, and in general, something is better than nothing.

Yes, something is indeed better than nothing. However, with untrusted certs you break many tools and you should not allow direct requests to your reverse proxy targets anyway. You should either trust this certificate on all of your clients (which will be hard, especially if you run more tools like Homarr) or use a trusted authority (like Let's Encrypt).

It would seem nice if homarr could talk to qbittorrent via that, especially considering all the arrs, and everything else does.

I'm assuming you're referring to the Servarr apps. They will also break if you use self-signed certs. See https://forums.sonarr.tv/t/sonarr-v3-self-signed-ssl-cert-error-when-adding-download-client/25476/2.

I strongly recommend you create an isolated network that does not accept any direct requests except from your reverse proxy, regardless of whether it's accessed from the local network, VPN, or externally. That way you also have the same certificates, access logs, access policies, and proxy being used for all requests. It also simplifies certificate renewal and eliminates dangerous points of failure.

@voc0der
Author

voc0der commented Dec 22, 2024

You're talking about... what? Qbittorrent exposes a socket that a client can connect to. In this case, Homarr is the client. Certificates also apply to this connection though, similar as they do for normal HTTP.

I'm talking about qBittorrent being configured to serve over SSL, and then the reverse proxy /also/ using a proxy_pass: http_s_://qbittorrent.... It can be from a CA that you made and used to sign that cert that day; it doesn't matter. Then that RP turns around and serves the client over 443 (securely) with an LE cert.

Could you elaborate on this? I do not agree. You never need the private key to verify SSL encryption. Maybe https://www.cloudflare.com/learning/ssl/what-is-ssl/ and https://www.cloudflare.com/learning/ssl/what-is-asymmetric-encryption/ would be interesting reads for you.

A lot of people are used to mounting their private.key alongside the fullchain-or-ca.pem or certificate.crt. Or maybe even mounting it everywhere, kind of like docker.sock. It's not for me. If someone got hold of these, which is not hard to imagine, MITM and historical TLS decryption become problems on DMZ-facing endpoints.

I have an isolated network. Like I said before, I am running Authelia as an IdP, and everything is reverse proxied with an LE cert. I'm trying to add internal certificates for Docker network traffic.

Let me turn the tables: why would I not want to use HTTPS between my qBt instance and my reverse proxy? Assume they cannot access the qBt instance or whatever anyway, if my firewall isn't a joke. That qBt HTTPS instance lives in its own Docker subnet (almost) alone; Nginx and Homarr have IPs there. But it would be great if it talked to Homarr via SSL. No ports are exposed beyond the reverse proxy, OIDC, etc. I agree that Authelia should be using a valid cert, but I don't care if qBt is, and for the reasons I stated, I prefer it not. But I prefer not to use HTTP.

@manuel-rw
Member

I'm talking about qBittorrent being configured to serve over SSL, and then the reverse proxy /also/ using a proxy_pass: http_s_://qbittorrent.... It can be from a CA that you made and used to sign that cert that day; it doesn't matter.

Then is there any reason why you'd not want Homarr to go over the proxy as well? It just seems to complicate things more.

A lot of people are used to mounting their private.key alongside the fullchain-or-ca.pem or certificate.crt. Or maybe even mounting it everywhere, kind of like docker.sock.

Obviously you should not mount or give your private keys to any third person. Same applies to the Docker Socket as you said 👍

Let me turn the tables: why would I not want to use HTTPS between my qBt instance and my reverse proxy? Assume they cannot access the qBt instance or whatever anyway, if my firewall isn't a joke. That qBt HTTPS instance lives in its own Docker subnet (almost) alone; Nginx and Homarr have IPs there. But it would be great if it talked to Homarr via SSL. No ports are exposed beyond the reverse proxy, OIDC, etc. I agree that Authelia should be using a valid cert, but I don't care if qBt is, and for the reasons I stated, I prefer it not. But I prefer not to use HTTP.

Thanks for explaining. As mentioned, this can be easily fixed by using a trusted certificate or trusting a self-signed certificate inside Homarr (and each of your clients, including Nginx). Your solution outlined in this PR will fix this problem as well, but it also disables all certificate checks. It is still better than plain HTTP (or any unencrypted protocol, for that matter), but the integrity and safety of the connection are no longer guaranteed. I strongly vote that we allow the user to mount / upload their own certificate (mounting seems safer to me and can be used in automated systems, such as Kubernetes clusters, where certificates are renewed or replaced automatically).

@voc0der
Author

voc0der commented Dec 23, 2024

Summary of Changes

  1. Custom SSL Certificates: Added support for QBITTORRENT_SSL_CERT and QBITTORRENT_SSL_KEY environment variables to use a custom SSL certificate and key.
  2. Optional Insecure Fallback: Introduced QBITTORRENT_SSL_INSECURE (default: false) to enable fallback for self-signed certificates.
  3. Default Behavior: Avoids adding requestConfig unless SSL customization is explicitly configured.
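The environment-driven behavior summarized above might look roughly like this. The variable names come from the summary; the helper itself (buildAgentFromEnv) is hypothetical and not the actual PR code.

```typescript
import fs from "node:fs";
import https from "node:https";

// Hypothetical sketch of the three behaviors: custom cert/key pair,
// explicit insecure opt-in, and a default of no requestConfig at all.
function buildAgentFromEnv(
  env: Record<string, string | undefined>,
): https.Agent | undefined {
  const certPath = env.QBITTORRENT_SSL_CERT;
  const keyPath = env.QBITTORRENT_SSL_KEY;
  const insecure = env.QBITTORRENT_SSL_INSECURE === "true";

  if (certPath && keyPath) {
    // Custom SSL certificate and key supplied by the user.
    return new https.Agent({
      cert: fs.readFileSync(certPath, "utf8"),
      key: fs.readFileSync(keyPath, "utf8"),
    });
  }
  if (insecure) {
    // Explicit opt-in to skip certificate validation.
    return new https.Agent({ rejectUnauthorized: false });
  }
  // Default: no agent, standard certificate validation applies.
  return undefined;
}
```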

@manuel-rw
Member

Summary of Changes

  1. Custom SSL Certificates: Added support for QBITTORRENT_SSL_CERT and QBITTORRENT_SSL_KEY environment variables to use a custom SSL certificate and key.
  2. Optional Insecure Fallback: Introduced QBITTORRENT_SSL_INSECURE (default: false) to enable fallback for self-signed certificates.
  3. Default Behavior: Avoids adding requestConfig unless SSL customization is explicitly configured.

Thank you so much for the change.
I appreciate that you're taking the time to work on this.

I'm wondering whether we should create a wrapper for fetch() and apply this change there globally. Of course, this will not apply to the ctrl/* libraries, because they abstract the fetch client, as I previously mentioned. Hence I suggest the following changes:

  • Implement a new package that wraps the fetch() function. It abstracts the certificate loading process into this function so it can be reused in multiple places. To avoid accidental usage of the standard fetch function, we also need a custom ESLint rule that will create a diagnostic for usages of the old fetch function (not sure how easy this is...).
  • Since we reuse the logic for non-qBittorrent code as well, rename the environment variables to reflect this change.
  • Since the ctrl/* libraries do not reuse the fetch API, we could extract the code from @voc0der into another function which we reuse in all initializations of these libraries (e.g. Deluge, ...).

This way, self-signed certificates are handled in a common and known way. Error handling can be centralized, and we only need to write tests once for the same code. We also avoid code duplication and multiple behaviours for the same functionality.
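A minimal sketch of such a wrapper, with all names illustrative. Note that Node's built-in fetch is undici-based and takes a dispatcher rather than an agent, so a real implementation would adapt accordingly; the sketch stays generic over any fetch-like function.

```typescript
import https from "node:https";

// A fetch-like signature kept deliberately loose for the sketch.
type FetchLike = (url: string, init?: Record<string, unknown>) => Promise<unknown>;

// Hypothetical wrapper: closes over a shared CA bundle and injects an
// https.Agent into every request, so certificate handling lives in one
// place instead of being duplicated in each integration.
function createFetchWithCertificates(baseFetch: FetchLike, ca?: string): FetchLike {
  const agent = ca ? new https.Agent({ ca }) : undefined;
  return (url, init = {}) =>
    baseFetch(url, agent ? { ...init, agent } : init);
}
```

Integration code would then import this wrapped function instead of the global fetch, which is exactly what the proposed ESLint rule would enforce.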

@Meierschlumpf / @SeDemal / @voc0der what do you think?

@Meierschlumpf
Member

I only kind of agree. IMO it makes sense to have a centralized function 👍🏼
But IMO it would be easier to use if you could add the certificate during the creation of an integration, because then you also run the test connection, which will show you that it's not trusting the self-signed certificate. This should then obviously be exposed through our API somehow? Not 100% sure how to structure it right now.

@manuel-rw
Member

manuel-rw commented Dec 23, 2024

I only kind of agree. IMO it makes sense to have a centralized function 👍🏼 But IMO it would be easier to use if you could add the certificate during the creation of an integration, because then you also run the test connection, which will show you that it's not trusting the self-signed certificate. This should then obviously be exposed through our API somehow? Not 100% sure how to structure it right now.

Good idea, but maybe we can add this in a second iteration? Or do you have a concrete proposal for how we could implement this? I agree with OP; we need some kind of solution for this problem soon-ish.

@Meierschlumpf
Member

Meierschlumpf commented Dec 23, 2024

(two screenshots of the proposed integration form UI)

I would suggest something in this direction.
In the DB we would have a new table called sslCertificates and a new nullable field in the integrations table named sslCertificate.

The red box would be advanced options or something like that.

@voc0der
Author

voc0der commented Dec 23, 2024

Would this new view accept file paths from the environment as an override?

Might I propose not storing it in the database, but having the UI work similarly to that? Most certificate users are not going to want to renew manually.

Both my LE and self-signed chains are managed by scripts. This would be the first and only application that tries to take over my certs and manage them itself. Which, please don't take offense... 😅 is kind of the reason using a real cert everywhere is bad, towards my original point.

Maybe if the env variables exist, additional UI controls could exist to verify the cert? That being said, I now need to learn how to do that with the Homarr codebase..

@Meierschlumpf
Member

I think we should discuss this further before implementing it 😉

@manuel-rw
Member

(screenshots quoted from the previous comment)

I would suggest something in this direction. In the DB we would have a new table called sslCertificates and a new nullable field in the integrations table named sslCertificate.

The red box would be advanced options or something like that

I think the problem with storing it in the database will be certificate renewals. As an example: I automatically fetch certificates in my Kubernetes cluster by referencing them in my ingress. They are usually valid for a short amount of time (e.g. 30 days) and will expire, so uploading them into a database will eventually lead to expiry. I would vote that we mount a certificate instead. We could check whether they conflict (e.g. whether requests to other sites, like google.com, fail or not). If they do, we should enable them conditionally. Otherwise, I don't see many drawbacks if we only have one global certificate.

@Meierschlumpf
Member

You mean one specific directory where you can place all your certificates, and then it makes the fetch request to all the integrations with that as the ca argument? That's okay for me as well. The only drawback I see is that the certificates have to be added at a separate location from the creation page, and they are not directly linked to the integrations that need them. But if it works other than that, sure!

@manuel-rw
Member

manuel-rw commented Dec 23, 2024

You mean one specific directory where you can place all your certificates, and then it makes the fetch request to all the integrations with that as the ca argument? That's okay for me as well. The only drawback I see is that the certificates have to be added at a separate location from the creation page, and they are not directly linked to the integrations that need them. But if it works other than that, sure!

Yes, that would be my suggestion. We could list the certificates in a dropdown on the integration create / edit page to let the user select them. That way, a certificate issuer (e.g. a script, an issuer in a cluster, or just a manual file update) can update the certificate on the fly. If the certificate was deleted in the meantime, just log a warning and don't use any custom certificate to verify. What do you think, @voc0der? Would you also be willing to implement this? From my point of view, the priority for this isn't too high, so it's likely we would prioritize it for a release after 1.0.
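The directory-based approach discussed here could be sketched like this. It is illustrative only; the accepted file extensions, the warning on a missing directory, and the function names are all assumptions.

```typescript
import fs from "node:fs";
import path from "node:path";
import https from "node:https";

// Hypothetical sketch: read every .pem/.crt file from a mounted directory
// and return their contents for use as the agent's `ca` option.
function loadCertificateAuthorities(dir: string): string[] | undefined {
  if (!fs.existsSync(dir)) {
    console.warn(`certificate directory ${dir} not found; using default trust store`);
    return undefined;
  }
  return fs
    .readdirSync(dir)
    .filter((name) => name.endsWith(".pem") || name.endsWith(".crt"))
    .map((name) => fs.readFileSync(path.join(dir, name), "utf8"));
}

function buildAgent(dir: string): https.Agent {
  const ca = loadCertificateAuthorities(dir);
  // When `ca` is undefined, Node falls back to its built-in trust store,
  // matching the "log a warning and verify normally" behavior above.
  return new https.Agent(ca ? { ca } : {});
}
```

Because the files are re-read from disk, an external issuer (a script, a cluster issuer, or a manual update) can rotate certificates without touching the database.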

@voc0der
Author

voc0der commented Dec 24, 2024

I like the ideas and I think I can do it. I'll take a look at how the templating works. If I'm understanding it correctly, each app will have a .key and .crt (filename not important) whose path they can save, and it will by default try to load that from the database.

If it fails to load that file (e.g. the CA dir is missing), it will try to connect insecurely. If no certificate is selected and they save that configuration, SSL is ignored altogether.

Labels
bug Something isn't working