Provide automated deployment of Azure resources used in end-to-end tests #77
Comments
For Promitor I have an Azure infrastructure repo to which contributors can PR new resources required for automated testing, and it is automatically deployed with GitHub Actions.
Is this for all infra including AKS, or only for resources like Event Hubs, Service Bus, etc.?
That's up to us to decide; we can just start with the Azure resources without the cluster if you prefer.
My only concern is the time to create/delete an AKS cluster; if we need to do it, we will make the tests even longer.
I'd start with upstreams
Sadly, they don't support queues and other resources we need for the moment 😢
This would not run every test run; only when there are changes to the infrastructure definition.
Aaaah, your idea is to have the infra there all the time and update it on the fly only when needed. I thought you meant deploying/destroying it during the tests.
Yes, correct. Doing the latter is more intensive and harder to get right. I think we can avoid that as we don't have the capacity for it.
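The approach discussed above — keeping the infra up permanently and redeploying only when its definition changes — can be sketched as a GitHub Actions workflow with a `paths` filter. This is an illustrative sketch only; the file layout, branch, and secret names are assumptions, not the actual KEDA setup:

```yaml
# Hypothetical workflow: run `terraform apply` only when files under
# terraform/ change on the default branch, so ordinary test runs never
# touch the infrastructure. Paths and secret names are placeholders.
name: deploy-test-infrastructure

on:
  push:
    branches: [main]
    paths:
      - "terraform/**" # skip runs that don't touch the infra definition

jobs:
  deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: terraform
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve
        env:
          ARM_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
          ARM_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
          ARM_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          ARM_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
```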
For this scenario you were right; we can use Terraform and manage all the infra from the same place. I can start on this during the week if we agree to use Terraform for everything (I don't know Bicep, sorry xD). I'd create a repo to manage the infra, something like 'keda-infrastructure' or just 'infrastructure'. Wdyt?
Bicep works fine, but if you want to use this cross-cloud then Terraform is OK. I'd introduce
I have expertise with Terraform, so I can create the scaffolding and the initial infrastructure; that's not a problem. I'm thinking about what infra we have, and IDK if we need to cover AWS now, because we create that infra during the e2e test and delete it afterwards, so maybe we could go with Bicep; but GCP has infra I need to review to check whether we should cover it. I said Terraform because it's a single language to manage all the infra, so it's easier for people who don't know a cloud-provider-specific language. There is also a bot for Terraform that we could use to improve the experience, showing the plan outputs and other stuff: https://github.com/runatlantis/atlantis
Let's use Terraform in that case; we don't want to do a migration later on.
I have checked, and we can update the secrets via the Secrets API, so we can take the Terraform outputs and update the secrets directly in the org. That way the secrets can be managed automatically on every Terraform execution.
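A minimal sketch of that idea, assuming the `integrations/github` Terraform provider: its `github_actions_organization_secret` resource can push a Terraform output straight into an org-level GitHub secret. The secret name and the referenced Event Hubs resource are hypothetical, not the actual KEDA configuration:

```hcl
# Hypothetical example: keep a GitHub organization secret in sync with a
# Terraform-managed connection string on every `terraform apply`.
terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

provider "github" {
  owner = "kedacore" # org that will own the secret
}

resource "github_actions_organization_secret" "eventhub_connection" {
  secret_name = "AZURE_EVENTHUB_CONNECTION_STRING" # illustrative name
  visibility  = "all"
  # Assumes an azurerm_eventhub_authorization_rule "e2e" defined elsewhere.
  plaintext_value = azurerm_eventhub_authorization_rule.e2e.primary_connection_string
}
```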
In theory, it's just going to spin up new resources and a manual action for secrets is fine IMO; at least for starters. I don't want that process to mess up our GH secrets :)
The problem here is that the secrets have to come from somewhere in order to store them as secrets. If we fetch them from the cloud provider, we still need access to the Azure subscription, so the blocker would still be there. I have checked and there is an Azure Key Vault integration for GitHub Actions, so we could put all the secrets from Terraform directly in the vault and read them in the workflow, but in that case I'd still prefer GH Secrets.
BTW, we can name them as
FYI - opened a ticket with CNCF for access (owner) to an Azure subscription so we could run these kinds of automated workloads where we want. My thinking is we could start small (just spinning up Azure Event Hubs / E2E tests) and move more of the workloads over time as we want: https://cncfservicedesk.atlassian.net/servicedesk/customer/portal/1/CNCFSD-1422
You are right. For the moment, I'll start creating the scaffolding with a simple resource but with all the elements ready (Terraform code/modules with a backend, secret management, docs, etc.), and then we can move the services one by one. To start I'll use my MVP subscription, and once the scaffolding is ready, we can change the SP and use another account for this (an MSFT or CNCF account; nothing to worry about).
@jeffhollan I can already tell you that they will not be able to help you :) I already looked into this. Please don't introduce yet another subscription @JorTurFer and just use the existing one :)
Okay,
I'm naively going through the motions to see where this ends cncf/credits#23 |
I have one question here: are we going to make the infra repo public, or will it be internal only?
Yes, it should be public so that every contributor can open a PR, IMO.
I think this is already done, as we have moved the infrastructure management to https://github.com/kedacore/testing-infrastructure and it's already public, so any contributor can just open a PR there to create the needed resources on Azure, but also AWS and GCP (GCP is still in progress).
Job well done, thanks! 🎉 Can we add this new addition to the contribution guide please? |
The e2e readme in keda has a section about e2e infrastructure, and that repo has a readme with a brief description.
I have created an issue in the test-tools repo to add documentation there, because we don't have any guide or help yet.
Thanks a ton! I've noticed that the contribution guide has a link to the test folder as well, so we're good to go; thanks!
Provide automated deployment of the Azure resources used in end-to-end tests, with Bicep, so that things are automated and I'm not the bottleneck (or at least less of one).
This is needed because our Azure subscription is not accessible to everyone; new resources should be just a PR away.