HTTPS/TLS/SSL provider ingress support #3
And then the users may specify which services need TLS, via
Thanks @arno01 - this is something we've been meaning to get to for a while but haven't had a chance to. This helps a lot.
Would also find this useful.
@arno01 @boz Would a feature such as allowing the ingress controller to proxy the requests to the Pods work, so that we could do the TLS termination in the server, such as nginx? Something that looks like:

```yaml
services:
  website:
    expose:
      - port: 80
        https_options:
          redirect: true # Proxy requests to the Pod by ignoring the TLS termination
```

These annotations could be used on the ingresses to achieve this:

```
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
```

Would such an implementation work with Akash? Given how limiting it is to rely on third-party solutions such as Cloudflare, having this as an intermediate solution while cert-manager is not working on the Akash providers would be a great enabler. IMO this should move higher in priority and not be left behind. I would gladly work on this feature. Let me know what you think. 😄
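Under the hood, that could translate into an Ingress along these lines (a sketch only; the resource name, host, and service name are illustrative placeholders, and only the two annotations come from the comment above):

```yaml
# Hypothetical Ingress the provider could generate for the "website" service.
# Names and host are placeholders; the annotations are the ones proposed above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # proxy to the Pod over TLS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"  # redirect plain HTTP to HTTPS
spec:
  rules:
    - host: website.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website
                port:
                  number: 80
```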
This would be great! Last time I looked into this there were prohibitive limits on the number of domains that a single LE account could make certs for, but it looks like those constraints have been relaxed. Would love to have this be available! |
@cloud-j-luna that's awesome, I'll be glad to test your PR. :-)
Related #2 |
Once this has the green light we can start working on it as we also work on #2 (although they are different PRs, they are related and in the scope of our tasks/needs). 😄
PRs related to this: … Currently only ClusterIssuer is enabled, so every deployment shares the same issuer, which by default is `letsencrypt`.
Love it! This is a great milestone, we can and should look into per-deployment issuers (support custom hostnames) after this. |
I've enabled the TLS certs out-of-the-box in our sandbox provider, feel free to test it.
@troian tenants can have Let's Encrypt certs out-of-the-box without the need to pass their x509 certs/keys or tokens (since no …). Please see … The Pod needs to specify … On the provider side it only requires the following:
Ideally, the providers should also be adding some attribute, probably …
Is your feature request related to a problem? Please describe.
People have to use 3rd party services for terminating HTTPS (TLS/SSL), i.e. CloudFlare.
Let's add HTTPS support to make Akash more decentralized! :-)
Describe the solution you'd like
There is a cert-manager for Kubernetes which supports multiple issuers, including ACME (Let's Encrypt supported!)
So it'd be cool if Akash could support that!
All that it would need is to support setting the correct annotation on the "Ingress" type of K8s resource. `letsencrypt` is just an arbitrary name; it could be anything there. The Cluster Issuer can be configured by the Akash provider admin. In my case I've configured it as `letsencrypt`.

And here are the instructions on how to configure a basic ACME issuer (I am using that) => https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer
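The annotation in question is presumably cert-manager's ingress-shim annotation, `cert-manager.io/cluster-issuer`; a sketch of how it would be set on a generated Ingress (the resource name, host, and TLS secret name are placeholders, not from this issue):

```yaml
# Sketch, assuming cert-manager's ingress-shim annotation is what is meant here.
# The issuer name "letsencrypt" must match a ClusterIssuer configured on the provider.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress                # placeholder name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - website.example.com          # placeholder hostname
      secretName: website-tls          # cert-manager stores the issued cert in this Secret
  rules:
    - host: website.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website          # placeholder service
                port:
                  number: 80
```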
The instructions are for the Staging Let's Encrypt. So to use the Production Let's Encrypt, just change `https://acme-staging-v02.api.letsencrypt.org/directory` to `https://acme-v02.api.letsencrypt.org/directory` :-)

But it's always good to test the staging one first, to make sure it is working (i.e. creating the secrets with the keys there), so as not to hit LE's rate limits.
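For completeness, a basic ACME `ClusterIssuer` along the lines of the linked docs might look like this (the email and secret name are placeholders, and the nginx solver class is an assumption about the provider's ingress controller):

```yaml
# Sketch of a basic ACME ClusterIssuer per the cert-manager docs linked above.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Staging endpoint; switch to https://acme-v02.api.letsencrypt.org/directory
    # for production once staging issuance is confirmed working.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                # assumes the nginx ingress controller
```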
I would see that the cert-manager `cluster-issuer` name could be configured via an Akash provider argument, in the same way we can specify the deployment runtime as of now: