diff --git a/README.md b/README.md
index a2184750..2c1047f9 100644
--- a/README.md
+++ b/README.md
@@ -104,25 +104,18 @@ but will also run different containers: database (h2) & vault secret engine if
 You can discover the [quarkus dev services](https://quarkus.io/guides/dev-services) and injected config by pressing on the key `c` within your terminal.
 
-If you plan to play with a quarkus demo application and bind it to a service, then install a kind cluster locally
-```bash
-VM_IP= // e.g. VM_IP=127.0.0.1
-curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind-reg-ingress.sh" | bash -s y latest kind 0 ${VM_IP}
-```
-and next follow then the instructions of the [Demo time](#demo-time) section :-)
+Next, follow the instructions of the [Demo time](#demo-time) section :-)
 
 ### Using Primaza on a k8s cluster
 
 In order to use Primaza on kubernetes, it is needed first to setup a cluster (kind, minikube, etc) and to install an ingress controller.
-To simplify this process, you can use the following bash script able to set up such environment using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) and [helm](https://helm.sh/docs/helm/helm_install/).
-
+You can use the following script to install a kubernetes cluster locally using kind:
 ```bash
-VM_IP=
-curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind-reg-ingress.sh" | bash -s y latest kind 0 ${VM_IP}
+curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
 ```
-**Remark**: The kubernetes's version can be changed if you replace `latest` with one of the version supported by kind `1.23 .. 1.25`
+> **Remark**: To see all the options proposed by the script, use the command `curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s -h`
 
-Install vault using the following script `./scripts/vault.sh`. We recommend to use this script as it is needed to perform different steps
+Once the cluster is up and running, install vault using the script `./scripts/vault.sh`. We recommend using this script as it performs the different steps
 post vault installation such as:
 - unseal,
 - store root token within the local folder `.vault/cluster-keys.json`,
@@ -133,7 +126,13 @@ post vault installation such as:
 
 > **Note**: If creation of the vault's pod is taking more than 60s as the container image must be downloaded, then the process will stop. In this case, remove the helm chart `./scripts/vault.sh remove` and repeat the operation.
 
-> **Tip**: Notice the messages displayed within the console as they told you how to get the root token and where they are stored, where to access the keys, etc !
+> **Tip**: Pay attention to the messages displayed within the terminal as they tell you how to get the root token, where the keys are stored, how to access them, etc.!
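+
+As a quick check (a sketch: it assumes the keys file follows the JSON layout of `vault operator init -format=json`, i.e. a `root_token` field, and that the helm chart exposes a `vault` service in the `vault` namespace), you can verify that the stored root token is usable:
+```bash
+# Read the root token saved by ./scripts/vault.sh (the key name is an assumption)
+export VAULT_TOKEN=$(jq -r '.root_token' .vault/cluster-keys.json)
+
+# Port-forward the vault service and check that the token is accepted
+kubectl port-forward svc/vault -n vault 8200:8200 &
+export VAULT_ADDR=http://127.0.0.1:8200
+vault token lookup
+```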
+ +We can now install Crossplane and its Helm provider +```bash +./scripts/crossplane.sh +``` +> **Tip**: Script usage is available using the `-h` parameter Create the primaza namespace ```bash @@ -151,7 +150,7 @@ helm install \ primaza-app \ primaza-app \ -n primaza \ - --set app.image=quay.io/halkyonio/primaza-app:latest \ + --set app.image=//primaza-app:latest \ --set app.host=primaza.${VM_IP}.nip.io \ --set app.envs.vault.url=${VAULT_URL} ``` @@ -160,12 +159,15 @@ helm install \ If you prefer to install everything all-in-one, use our bash scripts on a `kind` k8s cluster: ```bash VM_IP= -VAULT_URL=http://vault-internal.vault:8200 +export VAULT_URL=http://vault-internal.vault:8200 +export PRIMAZA_IMAGE_NAME=kind-registry:5000/local/primaza-app $(pwd)/scripts/vault.sh -$(pwd)/scripts/primaza.sh +$(pwd)/scripts/crossplane.sh +$(pwd)/scripts/primaza.sh build +$(pwd)/scripts/primaza.sh localdeploy ``` -> **Note**: Before to execute the `./primaza.sh` script, check the latest image pushed on quay.io as set the version to the one you want to test using the variable `export GIT_SHA_COMMIT=` ! +> **Note**: If you prefer to use the helm chart pushed on [Halkyon repository](https://github.com/halkyonio/helm-charts), don't use the parameters `build` and `localdeploy` And now, you can demo it ;-) @@ -188,29 +190,14 @@ To play with Primaza, you can use the following scenario: Everything is in place to claim a Service using the following commands: -- Install the `fruits` postgresql DB that the Quarkus Fruits application will access - ```bash - DB_USERNAME=healthy - DB_PASSWORD=healthy - DB_DATABASE=fruits_database - RELEASE_NAME=postgresql - VERSION=11.9.13 - helm uninstall postgresql -n db - kubectl delete pvc -lapp.kubernetes.io/name=$RELEASE_NAME -n db - - helm install $RELEASE_NAME bitnami/postgresql \ - --version $VERSION \ - --set auth.username=$DB_USERNAME \ - --set auth.password=$DB_PASSWORD \ - --set auth.database=$DB_DATABASE \ - --create-namespace \ - -n db - ``` - Deploy the Quarkus Fruits application within the namespace `app` ```bash - kubectl create ns app - kubectl delete -f $(pwd)/scripts/data/atomic-fruits.yml - kubectl apply -f $(pwd)/scripts/data/atomic-fruits.yml + helm install fruits-app halkyonio/fruits-app \ + -n app --create-namespace \ + --set app.image=quay.io/halkyonio/atomic-fruits:latest \ + --set app.host=atomic-fruits..nip.io \ + --set app.serviceBinding.enabled=false \ + --set db.enabled=false ``` - Create an entry within the secret store engine at the path `primaza/fruits`. This path will be used to configure the credentials to access the `fruits_database`. 
```bash @@ -222,8 +209,8 @@ Everything is in place to claim a Service using the following commands: export VAULT_TOKEN=root export VAULT_ADDR=http://localhost: - // Next create a key - vault kv put -mount=secret primaza/fruits healthy=healthy + // Next create the key that we need to access the Postgresql fruits db + vault kv put -mount=secret primaza/fruits username=healthy password=healthy database=fruits_database vault kv get -mount=secret primaza/fruits ``` @@ -237,7 +224,7 @@ Everything is in place to claim a Service using the following commands: // To be executed when steps are done manually or when using quarkus:dev export KIND_URL=$(kubectl config view -o json | jq -r --arg ctx kind-kind '.clusters[] | select(.name == $ctx) | .cluster.server') - $(pwd)/scripts/data/cluster.sh + $(pwd)/scripts/data/cluster.sh // Common steps $(pwd)/scripts/data/services.sh diff --git a/app/pom.xml b/app/pom.xml index 8ead5be8..82bdd546 100644 --- a/app/pom.xml +++ b/app/pom.xml @@ -224,12 +224,15 @@ true + https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_clusterenvironments.yaml https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_registeredservices.yaml https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_servicebindings.yaml https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_servicecatalogs.yaml https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_serviceclaims.yaml https://raw.githubusercontent.com/primaza/primaza/main/config/crd/bases/primaza.io_serviceclasses.yaml + + https://raw.githubusercontent.com/crossplane-contrib/provider-helm/master/package/crds/helm.crossplane.io_releases.yaml diff --git a/app/src/main/java/io/halkyon/Templates.java b/app/src/main/java/io/halkyon/Templates.java index 708c0093..28526d23 100644 --- a/app/src/main/java/io/halkyon/Templates.java +++ b/app/src/main/java/io/halkyon/Templates.java @@ -3,11 +3,7 @@ import java.util.List; import java.util.Map; -import io.halkyon.model.Application; -import io.halkyon.model.Claim; -import io.halkyon.model.Cluster; -import io.halkyon.model.Credential; -import io.halkyon.model.Service; +import io.halkyon.model.*; import io.quarkus.qute.CheckedTemplate; import io.quarkus.qute.TemplateInstance; @@ -40,9 +36,10 @@ public static native TemplateInstance list(String title, List services, public static native TemplateInstance form(String title, Service service); - public static native TemplateInstance listDiscovered(String title, List services, long items); + public static native TemplateInstance listDiscovered(String title, List services, + long items); - public static native TemplateInstance listDiscoveredTable(List services, long items); + public static native TemplateInstance listDiscoveredTable(List services, long items); } @CheckedTemplate(basePath = "credentials", requireTypeSafeExpressions = false) diff --git a/app/src/main/java/io/halkyon/model/Claim.java b/app/src/main/java/io/halkyon/model/Claim.java index 99ff2678..9244506e 100644 --- a/app/src/main/java/io/halkyon/model/Claim.java +++ b/app/src/main/java/io/halkyon/model/Claim.java @@ -1,5 +1,6 @@ package io.halkyon.model; +import java.util.Arrays; import java.util.Collections; import java.util.Date; import java.util.List; @@ -60,6 +61,9 @@ public static List listAll() { } public static List listAvailable() { - return find("status=:status", Collections.singletonMap("status", ClaimStatus.BINDABLE.toString())).list(); + // 
TODO: To be reviewed to support to display claims when status is pending or bindable + // return find("status=:status", Collections.singletonMap("status", ClaimStatus.BINDABLE.toString())).list(); + return find("status in :statuses", Collections.singletonMap("statuses", + Arrays.asList(ClaimStatus.PENDING.toString(), ClaimStatus.BINDABLE.toString()))).list(); } } diff --git a/app/src/main/java/io/halkyon/model/Service.java b/app/src/main/java/io/halkyon/model/Service.java index e87454cc..adf0930e 100644 --- a/app/src/main/java/io/halkyon/model/Service.java +++ b/app/src/main/java/io/halkyon/model/Service.java @@ -47,6 +47,10 @@ public class Service extends PanacheEntityBase { */ public String externalEndpoint; public Boolean available; + public Boolean installable; + public String helmRepo; + public String helmChart; + public String helmChartVersion; @CreationTimestamp public Date created; @UpdateTimestamp @@ -98,6 +102,12 @@ public static List listAll() { } public static List findAvailableServices() { + // TODO. This code should be reviewed as currently we check if a Service + // part of the catalog as the property available = true + // instead of checking if a service is running within the cluster(s). + // This service must check using the cache, the available services + // old code --> + // return Service.findAll(Sort.ascending("name")).list(); return Service.find("available=true").list(); } } diff --git a/app/src/main/java/io/halkyon/model/ServiceDiscovered.java b/app/src/main/java/io/halkyon/model/ServiceDiscovered.java new file mode 100644 index 00000000..35e71cb6 --- /dev/null +++ b/app/src/main/java/io/halkyon/model/ServiceDiscovered.java @@ -0,0 +1,8 @@ +package io.halkyon.model; + +public class ServiceDiscovered { + public String namespace; + public String clusterName; + public String kubernetesSvcName; + public Service serviceIdentity; +} diff --git a/app/src/main/java/io/halkyon/resource/page/ApplicationResource.java b/app/src/main/java/io/halkyon/resource/page/ApplicationResource.java index 8651553a..1ed978a9 100644 --- a/app/src/main/java/io/halkyon/resource/page/ApplicationResource.java +++ b/app/src/main/java/io/halkyon/resource/page/ApplicationResource.java @@ -103,11 +103,25 @@ public Response doClaimApplication(@PathParam("id") long applicationId, @FormPar if (claim.service == null) { throw new NotAcceptableException(String.format("Claim %s has no services available", claimId)); } + if (claim.service.installable) { + try { + System.out.println("Service is installable using crossplane. Let's do it :-)"); + bindService.createCrossplaneHelmRelease(application.cluster, claim.service); + } catch (ClusterConnectException ex) { + throw new InternalServerErrorException( + "Can't deploy the service with the cluster " + ex.getCluster() + ". 
Cause: " + ex.getMessage()); + } + } if (claim.service.credentials == null || claim.service.credentials.isEmpty()) { throw new NotAcceptableException(String.format("Service %s has no credentials", claim.service.name)); } claim.application = application; try { + // TODO: Do a temporary workaround and hard code the values :-( + claim.service.cluster = claim.application.cluster; + claim.service.name = "postgresql"; + claim.service.namespace = "db"; + claim.persist(); bindService.bindApplication(claim); claim.persist(); return Response.ok().build(); diff --git a/app/src/main/java/io/halkyon/resource/page/ClaimResource.java b/app/src/main/java/io/halkyon/resource/page/ClaimResource.java index b5f55dbc..a083262c 100644 --- a/app/src/main/java/io/halkyon/resource/page/ClaimResource.java +++ b/app/src/main/java/io/halkyon/resource/page/ClaimResource.java @@ -36,6 +36,7 @@ import io.halkyon.resource.requests.ClaimRequest; import io.halkyon.services.BindApplicationService; import io.halkyon.services.ClaimStatus; +import io.halkyon.services.KubernetesClientService; import io.halkyon.services.UpdateClaimJob; import io.halkyon.utils.AcceptedResponseBuilder; import io.halkyon.utils.FilterableQueryBuilder; @@ -51,6 +52,9 @@ public class ClaimResource { private final UpdateClaimJob claimingService; private final BindApplicationService bindService; + @Inject + KubernetesClientService kubernetesClientService; + @Inject public ClaimResource(Validator validator, UpdateClaimJob claimingService, BindApplicationService bindService) { this.validator = validator; @@ -210,8 +214,35 @@ private void doUpdateClaim(Claim claim, ClaimRequest request) { claimingService.updateClaim(claim); + // TODO: Logic to be reviewed + if (claim.service.installable != null && claim.service.installable && claim.application != null) { + try { + System.out.println("Service is installable using crossplane. Let's do it :-)"); + bindService.createCrossplaneHelmRelease(claim.application.cluster, claim.service); + if (kubernetesClientService.getServiceInCluster(claim.application.cluster, claim.service.getProtocol(), + claim.service.getPort()).isPresent()) { + claim.service.cluster = claim.application.cluster; + } + } catch (ClusterConnectException ex) { + throw new InternalServerErrorException( + "Can't deploy the service with the cluster " + ex.getCluster() + ". Cause: " + ex.getMessage()); + } + } + + // TODO: We must find the new service created (= name & namespace + port), otherwise the url returned by + // generateUrlByClaimService(claim) will be null + LOG.infof("Service name: %s", claim.service.name == null ? "" : claim.service.name); + LOG.infof("Service namespace: %s", claim.service.namespace == null ? "" : claim.service.namespace); + LOG.infof("Service port: %s", claim.service.getPort() == null ? "" : claim.service.getPort()); + LOG.infof("Service protocol: %s", claim.service.getProtocol() == null ? "" : claim.service.getProtocol()); + if (claim.service != null && claim.service.credentials != null && claim.application != null) { try { + // TODO: Do a temporary workaround and hard code the values :-( + claim.service.cluster = claim.application.cluster; + claim.service.name = "postgresql"; + claim.service.namespace = "db"; + claim.persist(); bindService.bindApplication(claim); } catch (ClusterConnectException e) { LOG.error("Could bind application because there was connection errors. 
Cause: " + e.getMessage()); diff --git a/app/src/main/java/io/halkyon/resource/page/ServiceResource.java b/app/src/main/java/io/halkyon/resource/page/ServiceResource.java index e389063a..a236b8a7 100644 --- a/app/src/main/java/io/halkyon/resource/page/ServiceResource.java +++ b/app/src/main/java/io/halkyon/resource/page/ServiceResource.java @@ -27,8 +27,11 @@ import org.jboss.resteasy.annotations.Form; import io.halkyon.Templates; +import io.halkyon.exceptions.ClusterConnectException; import io.halkyon.model.Service; +import io.halkyon.model.ServiceDiscovered; import io.halkyon.resource.requests.ServiceRequest; +import io.halkyon.services.KubernetesClientService; import io.halkyon.services.ServiceDiscoveryJob; import io.halkyon.utils.AcceptedResponseBuilder; import io.halkyon.utils.FilterableQueryBuilder; @@ -44,6 +47,9 @@ public class ServiceResource { @Inject ServiceDiscoveryJob serviceDiscoveryJob; + @Inject + KubernetesClientService kubernetesClientService; + @GET @Path("/new") @Produces(MediaType.TEXT_HTML) @@ -190,18 +196,20 @@ public io.halkyon.model.Service findByNameAndVersion(@PathParam("name") String n @Produces(MediaType.TEXT_HTML) @Consumes(MediaType.APPLICATION_JSON) @Path("/discovered") - public TemplateInstance listDiscoveredServices() { - List discoveredServices = Service.findAvailableServices(); - return Templates.Services.listDiscovered("Services available", discoveredServices, discoveredServices.size()); + public TemplateInstance listDiscoveredServices() throws ClusterConnectException { + List servicesDiscovered = kubernetesClientService.discoverServicesInCluster(); + return Templates.Services.listDiscovered("Services available", servicesDiscovered, servicesDiscovered.size()); + // List services = Service.findAvailableServices(); + // return Templates.Services.listDiscoveredTable(services, services.size()); } @GET @Produces(MediaType.TEXT_HTML) @Consumes(MediaType.APPLICATION_JSON) @Path("/discovered/polling") - public TemplateInstance pollingDiscoveredServices() { - List discoveredServices = Service.findAvailableServices(); - return Templates.Services.listDiscoveredTable(discoveredServices, discoveredServices.size()); + public TemplateInstance pollingDiscoveredServices() throws ClusterConnectException { + List servicesDiscovered = kubernetesClientService.discoverServicesInCluster(); + return Templates.Services.listDiscovered("Services available", servicesDiscovered, servicesDiscovered.size()); } private void doUpdateService(Service service, ServiceRequest request) { @@ -210,6 +218,14 @@ private void doUpdateService(Service service, ServiceRequest request) { service.type = request.type; service.endpoint = request.endpoint; service.externalEndpoint = request.externalEndpoint; + if (request.installable != null && request.installable.equals("on")) { + service.installable = true; + } else { + service.installable = false; + } + service.helmRepo = request.helmRepo; + service.helmChart = request.helmChart; + service.helmChartVersion = request.helmChartVersion; if (StringUtils.isNotEmpty(service.externalEndpoint)) { service.available = true; diff --git a/app/src/main/java/io/halkyon/resource/requests/ServiceRequest.java b/app/src/main/java/io/halkyon/resource/requests/ServiceRequest.java index 961bf1a2..0f9a9405 100644 --- a/app/src/main/java/io/halkyon/resource/requests/ServiceRequest.java +++ b/app/src/main/java/io/halkyon/resource/requests/ServiceRequest.java @@ -23,4 +23,12 @@ public class ServiceRequest { public String endpoint; @FormParam public String externalEndpoint; + 
@FormParam + public String installable; + @FormParam + public String helmRepo; + @FormParam + public String helmChart; + @FormParam + public String helmChartVersion; } diff --git a/app/src/main/java/io/halkyon/services/BindApplicationService.java b/app/src/main/java/io/halkyon/services/BindApplicationService.java index 2b302f8e..f2494122 100644 --- a/app/src/main/java/io/halkyon/services/BindApplicationService.java +++ b/app/src/main/java/io/halkyon/services/BindApplicationService.java @@ -12,18 +12,18 @@ import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; +import org.jboss.logging.Logger; + import io.halkyon.exceptions.ClusterConnectException; -import io.halkyon.model.Application; -import io.halkyon.model.Claim; -import io.halkyon.model.Credential; -import io.halkyon.model.CredentialParameter; -import io.halkyon.model.Service; +import io.halkyon.model.*; import io.halkyon.utils.StringUtils; import io.quarkus.vault.VaultKVSecretEngine; @ApplicationScoped public class BindApplicationService { + private static final Logger LOG = Logger.getLogger(BindApplicationService.class); + public static final String TYPE_KEY = "type"; public static final String URL_KEY = "url"; public static final String HOST_KEY = "host"; @@ -31,6 +31,8 @@ public class BindApplicationService { public static final String USERNAME_KEY = "username"; public static final String PASSWORD_KEY = "password"; + public static final String DATABASE_KEY = "database"; + public static final String VAULT_KV_PATH_KEY = "vault-path"; @Inject @@ -44,6 +46,10 @@ public void unBindApplication(Claim claim) throws ClusterConnectException { deleteSecretInNamespace(claim); removeIngressHostFromApplication(claim); rolloutApplication(claim); + // TODO: Test should be improved to test if the service has been deployed using Crossplane + if (claim.service.installable) { + deleteCrossplaneHelmRelease(claim); + } } private void removeIngressHostFromApplication(Claim claim) { @@ -68,6 +74,8 @@ public void bindApplication(Claim claim) throws ClusterConnectException { app.ingress = getIngressHost(app); app.persist(); } + } else { + LOG.infof("Credential: %s; url: %s ", credential.vaultKvPath, url); } } @@ -87,30 +95,59 @@ private void unMountSecretVolumeEnvInApplication(Claim claim) throws ClusterConn kubernetesClientService.unMountSecretVolumeEnvInApplication(claim); } - private void createSecretForApplication(Claim claim, Credential credential, String url) - throws ClusterConnectException { - String username = credential.username; - String password = credential.password; + public void createCrossplaneHelmRelease(Cluster cluster, Service service) throws ClusterConnectException { + kubernetesClientService.createCrossplaneHelmRelease(cluster, service); + } - if (StringUtils.isNotEmpty(credential.vaultKvPath)) { - Map vaultSecret = kvSecretEngine.readSecret(credential.vaultKvPath); - Set usernames = vaultSecret.keySet(); - username = usernames.iterator().next(); - password = vaultSecret.get(username); - } + public void deleteCrossplaneHelmRelease(Claim claim) throws ClusterConnectException { + kubernetesClientService.deleteRelease(claim); + } + private void createSecretForApplication(Claim claim, Credential credential, String url) + throws ClusterConnectException { Map secretData = new HashMap<>(); secretData.put(TYPE_KEY, toBase64(claim.type)); secretData.put(HOST_KEY, toBase64(getHostFromUrl(url))); secretData.put(PORT_KEY, toBase64(getPortFromUrl(url))); secretData.put(URL_KEY, toBase64(url)); - 
secretData.put(USERNAME_KEY, toBase64(username)); - secretData.put(PASSWORD_KEY, toBase64(password)); - for (CredentialParameter param : credential.params) { - secretData.put(param.paramName, toBase64(param.paramValue)); + String username = ""; + String password = ""; + String database = ""; + + if (StringUtils.isNotEmpty(credential.username) && StringUtils.isNotEmpty(credential.password)) { + username = credential.username; + password = credential.password; + for (CredentialParameter param : credential.params) { + secretData.put(param.paramName, toBase64(param.paramValue)); + } } + if (StringUtils.isNotEmpty(credential.vaultKvPath)) { + Map vaultSecret = kvSecretEngine.readSecret(credential.vaultKvPath); + Set vaultSet = vaultSecret.keySet(); + for (String key : vaultSet) { + if (key.equals(USERNAME_KEY)) { + username = vaultSecret.get(USERNAME_KEY); + credential.username = username; + } else if (key.equals(PASSWORD_KEY)) { + password = vaultSecret.get(PASSWORD_KEY); + credential.password = password; + } else if (key.equals(DATABASE_KEY)) { + database = vaultSecret.get(DATABASE_KEY); + } else { + secretData.put(key, vaultSecret.get(key)); + CredentialParameter credentialParameter = new CredentialParameter(); + credentialParameter.paramName = key; + credentialParameter.paramValue = vaultSecret.get(key); + credential.params.add(credentialParameter); + } + } + } + secretData.put(USERNAME_KEY, toBase64(username)); + secretData.put(PASSWORD_KEY, toBase64(password)); + secretData.put(DATABASE_KEY, toBase64(database)); + kubernetesClientService.mountSecretInApplication(claim, secretData); } @@ -125,17 +162,29 @@ private Credential getFirstCredentialFromService(Service service) { private String generateUrlByClaimService(Claim claim) { Application application = claim.application; Service service = claim.service; + LOG.infof("Application cluster name: %s", application.cluster.name == null ? "" : application.cluster.name); + LOG.infof("Application namespace: %s", application.name == null ? "" : application.namespace); + + LOG.infof("Service cluster: %s", service.cluster == null ? "" : service.cluster); + LOG.infof("Service name: %s", service.name == null ? "" : service.name); + LOG.infof("Service namespace: %s", service.namespace == null ? "" : service.namespace); + LOG.infof("Service port: %s", service.getPort() == null ? "" : service.getPort()); + LOG.infof("Service protocol: %s", service.getProtocol() == null ? 
"" : service.getProtocol()); + if (Objects.equals(application.cluster, service.cluster) && Objects.equals(application.namespace, service.namespace)) { + LOG.info("Rule 1: app + service within same ns, cluster"); // rule 1: app + service within same ns, cluster // -> app can access the service using: protocol://service_name:port return String.format("%s://%s:%s", service.getProtocol(), service.name, service.getPort()); } else if (Objects.equals(application.cluster, service.cluster)) { + LOG.info("Rule 2: app + service in different ns, same cluster"); // rule 2: app + service in different ns, same cluster // -> app can access the service using: protocol://service_name.namespace:port return String.format("%s://%s.%s:%s", service.getProtocol(), service.name, service.namespace, service.getPort()); } else if (StringUtils.isNotEmpty(service.externalEndpoint)) { + LOG.info("Rule 2: rule 3 and 4: app + service running in another cluster using external IP"); // rule 3 and 4: app + service running in another cluster using external IP // -> app can access the service using: protocol://service-external-ip:port return String.format("%s://%s:%s", service.getProtocol(), service.externalEndpoint, service.getPort()); diff --git a/app/src/main/java/io/halkyon/services/KubernetesClientService.java b/app/src/main/java/io/halkyon/services/KubernetesClientService.java index 4fdf0b5c..d6a32055 100644 --- a/app/src/main/java/io/halkyon/services/KubernetesClientService.java +++ b/app/src/main/java/io/halkyon/services/KubernetesClientService.java @@ -2,11 +2,7 @@ import static io.halkyon.utils.StringUtils.equalsIgnoreCase; -import java.util.List; -import java.util.Locale; -import java.util.Map; -import java.util.Objects; -import java.util.Optional; +import java.util.*; import java.util.regex.Pattern; import jakarta.enterprise.context.ApplicationScoped; @@ -14,6 +10,8 @@ import org.jboss.logging.Logger; +import io.crossplane.helm.v1beta1.Release; +import io.crossplane.helm.v1beta1.ReleaseBuilder; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.HasMetadata; import io.fabric8.kubernetes.api.model.KubernetesResourceList; @@ -33,7 +31,9 @@ import io.halkyon.model.Application; import io.halkyon.model.Claim; import io.halkyon.model.Cluster; +import io.halkyon.model.ServiceDiscovered; import io.halkyon.utils.StringUtils; +import io.quarkus.panache.common.Sort; @ApplicationScoped public class KubernetesClientService { @@ -52,6 +52,35 @@ public List getDeploymentsInCluster(Cluster cluster) throws ClusterC return filterByCluster(getClientForCluster(cluster).apps().deployments(), cluster); } + /** + * + * Return the list of the services available for each cluster by excluding the black listed namespaces + */ + public List discoverServicesInCluster() throws ClusterConnectException { + List serviceCatalog = io.halkyon.model.Service.findAll(Sort.ascending("name")).list(); + List servicesDiscovered = new ArrayList(); + + for (Cluster cluster : Cluster.listAll()) { + List kubernetesServices = filterByCluster(getClientForCluster(cluster).services(), cluster); + for (Service service : kubernetesServices) { + for (io.halkyon.model.Service serviceIdentity : serviceCatalog) { + boolean found = service.getSpec().getPorts().stream() + .anyMatch(p -> equalsIgnoreCase(p.getProtocol(), serviceIdentity.getProtocol()) + && String.valueOf(p.getPort()).equals(serviceIdentity.getPort())); + if (found) { + ServiceDiscovered serviceDiscovered = new ServiceDiscovered(); + serviceDiscovered.clusterName = 
cluster.name; + serviceDiscovered.namespace = service.getMetadata().getNamespace(); + serviceDiscovered.kubernetesSvcName = service.getMetadata().getName(); + serviceDiscovered.serviceIdentity = serviceIdentity; + servicesDiscovered.add(serviceDiscovered); + } + } + } + } + return servicesDiscovered; + } + /** * Check whether a service with : is running in the cluster. Exclude the services installed under * listed namespaces @@ -82,6 +111,20 @@ public void deleteSecretInNamespace(Claim claim) throws ClusterConnectException .withName(secretName).withNamespace(application.namespace).endMetadata().build()); } + /** + * Delete the Crossplane Release + */ + public void deleteRelease(Claim claim) throws ClusterConnectException { + // TODO: To be reviewed in order to user the proper cluster + KubernetesClient client = getClientForCluster(claim.application.cluster); + LOG.infof("Application cluster: ", claim.application.cluster); + LOG.infof("Helm chart name: ", claim.service.helmChart); + ReleaseBuilder release = new ReleaseBuilder(); + release.withApiVersion("helm.crossplane.io").withKind("v1beta1").withNewMetadata() + .withName(claim.service.helmChart).endMetadata(); + client.resource(release.build()).delete(); + } + /** * Add the secret into the specified cluster and namespace. */ @@ -180,6 +223,37 @@ public String getIngressHost(Application application) throws ClusterConnectExcep } } + /** + * Create the Crossplane Helm Release CR + */ + public void createCrossplaneHelmRelease(Cluster cluster, io.halkyon.model.Service service) + throws ClusterConnectException { + + // Create Release object + ReleaseBuilder release = new ReleaseBuilder(); + release.withApiVersion("helm.crossplane.io").withKind("v1beta1").withNewMetadata().withName(service.helmChart) + .endMetadata().withNewSpec().withNewV1beta1ForProvider().addNewV1beta1Set().withName("auth.database") + .withValue("fruits_database").endV1beta1Set().addNewV1beta1Set().withName("auth.username") + .withValue("healthy").endV1beta1Set().addNewV1beta1Set().withName("auth.password").withValue("healthy") + .endV1beta1Set().withNamespace("db").withWait(true).withNewV1beta1Chart().withName(service.helmChart) + .withRepository(service.helmRepo).withVersion(service.helmChartVersion).endV1beta1Chart() + .endV1beta1ForProvider().withNewV1beta1ProviderConfigRef().withName("helm-provider") + .endV1beta1ProviderConfigRef().endSpec(); + + // TODO: Logic to be reviewed as we have 2 use cases: + // Service(s) instances has been discovered in cluster x.y.z + // Service is not yet installed and will be installed in cluster x.y.z and namespace t.u.v + if (cluster != null) { + client = getClientForCluster(cluster); + } else { + client = getClientForCluster(service.cluster); + } + + MixedOperation, Resource> releaseClient = client + .resources(Release.class); + releaseClient.resource(release.build()).create(); + } + @Transactional public KubernetesClient getClientForCluster(Cluster cluster) throws ClusterConnectException { try { diff --git a/app/src/main/java/io/halkyon/services/ServiceDiscoveryJob.java b/app/src/main/java/io/halkyon/services/ServiceDiscoveryJob.java index 666136e8..c4fdf71d 100644 --- a/app/src/main/java/io/halkyon/services/ServiceDiscoveryJob.java +++ b/app/src/main/java/io/halkyon/services/ServiceDiscoveryJob.java @@ -70,7 +70,11 @@ public boolean linkServiceInCluster(Service service) { service.available = false; List clusters = Cluster.listAll(); for (Cluster cluster : clusters) { + LOG.debugf("Checking after the service: %s, %s, %s", 
service.name, service.getProtocol(), + service.getPort()); if (updateServiceIfFoundInCluster(service, cluster)) { + LOG.infof("Service: %s, %s found within namespace: %s of the cluster: %s", service.name, + service.getPort(), service.namespace, service.cluster); updated = true; break; } diff --git a/app/src/main/resources/templates/index/home.html b/app/src/main/resources/templates/index/home.html index 0470d9a5..fc8f976d 100644 --- a/app/src/main/resources/templates/index/home.html +++ b/app/src/main/resources/templates/index/home.html @@ -25,73 +25,77 @@

         Welcome to Primaza
 
-          Register
-          the service(s) to search for
+          Services catalog
+          to be claimed
 
-          Set their
-          credential
+          And their
+          credentials
 
-          Discover
-          the services available
+          Services
+          available ... or not
 
           Application(s)
-          that primaza found in the clusters
+          running in the cluster(s)
 
-          Claim
-          to acquire a service
+          Manage claims
+          in the clusters
{/body} {/include} \ No newline at end of file diff --git a/app/src/main/resources/templates/services/form.html b/app/src/main/resources/templates/services/form.html index 17bac8da..6644902a 100644 --- a/app/src/main/resources/templates/services/form.html +++ b/app/src/main/resources/templates/services/form.html @@ -1,77 +1,136 @@ {@java.lang.Integer items} {#include base} - {#title}Service{/title} - {#body} -
-    {#if service.id == null }
-      New Service
-    {#else}
-      Update Service
-    {/if}
-    [form fields: name, version, type, endpoint OR external endpoint; Back link]
+    {#if service.id == null }
+      New Service
+    {#else}
+      Update Service
+    {/if}
+    [form fields: name, version, type, endpoint OR external endpoint, plus the new installable, helm repo, helm chart and helm chart version inputs; Back link]
+ + {/body} {/include} \ No newline at end of file diff --git a/app/src/main/resources/templates/services/listDiscovered.html b/app/src/main/resources/templates/services/listDiscovered.html index 607941df..eefd5b54 100644 --- a/app/src/main/resources/templates/services/listDiscovered.html +++ b/app/src/main/resources/templates/services/listDiscovered.html @@ -1,6 +1,6 @@ {@java.lang.Integer items} {#include base} - {#title}Available Services{/title} + {#title}Discovered Services{/title} {#body}
{#include services/listDiscoveredTable.html services=services items=items /} diff --git a/app/src/main/resources/templates/services/listDiscoveredTable.html b/app/src/main/resources/templates/services/listDiscoveredTable.html index fea40a44..9f5d132e 100644 --- a/app/src/main/resources/templates/services/listDiscoveredTable.html +++ b/app/src/main/resources/templates/services/listDiscoveredTable.html @@ -12,21 +12,21 @@ {#for service in services} - {service.name} - {service.version} + {service.kubernetesSvcName} + {service.serviceIdentity.version} - {#if service.isStandalone()} - {service.externalEndpoint} + {#if service.serviceIdentity.isStandalone()} + {service.serviceIdentity.externalEndpoint} {#else} - {service.protocol}://{service.name}.{service.namespace}:{service.port} + {service.serviceIdentity.protocol}://{service.serviceIdentity.name}.{service.namespace}:{service.serviceIdentity.port} {/if} {service.namespace} - {#if service.isStandalone()} - Standalone + {#if service.serviceIdentity.isStandalone()} + Standalone {#else} - {service.cluster.name} + {service.clusterName} {/if} diff --git a/app/src/test/java/io/halkyon/ApplicationsPageTest.java b/app/src/test/java/io/halkyon/ApplicationsPageTest.java index ff19dfb5..5b7c0f6c 100644 --- a/app/src/test/java/io/halkyon/ApplicationsPageTest.java +++ b/app/src/test/java/io/halkyon/ApplicationsPageTest.java @@ -325,20 +325,23 @@ public void testBindApplicationGettingCredentialsFromVault() throws ClusterConne String appName = prefix + "app"; String username = "user1"; String password = "pass1"; + String database = "database1"; // mock data configureMockServiceFor(clusterName, "testbind", "1111", "ns1"); configureMockApplicationFor(clusterName, appName, "image2", "ns1"); // create data Service service = createService(serviceName, "version", "type", "testbind:1111"); - createCredential(credentialName, service.id, "user1", "pass1", "myapps/app"); + createCredential(credentialName, service.id, null, null, "myapps/app"); createCluster(clusterName, "host:port"); Map newsecrets = new HashMap<>(); - newsecrets.put(username, password); + newsecrets.put("username", username); + newsecrets.put("password", password); + newsecrets.put("database", database); kvSecretEngine.writeSecret("myapps/app", newsecrets); Map secret = kvSecretEngine.readSecret("myapps/app"); String secrets = new TreeMap<>(secret).toString(); - assertEquals("{user1=pass1}", secrets); + assertEquals("{database=database1, password=pass1, username=user1}", secrets); serviceDiscoveryJob.execute(); // this action will change the service to available createClaim(claimName, serviceName + "-version"); @@ -368,6 +371,7 @@ public void testBindApplicationGettingCredentialsFromVault() throws ClusterConne assertNotNull(actualClaim.credential); assertEquals("user1", actualClaim.credential.username); assertEquals("pass1", actualClaim.credential.password); + assertEquals(ClaimStatus.BOUND.toString(), actualClaim.status); // protocol://service_name:port diff --git a/crossplane.md b/crossplane.md index 253f2144..065d7804 100644 --- a/crossplane.md +++ b/crossplane.md @@ -24,15 +24,15 @@ cat <0.0.0-0" -# pullSecretRef: -# name: museum-creds -# namespace: default -# url: "https://charts.bitnami.com/bitnami/wordpress-9.3.19.tgz" - namespace: wordpress -# insecureSkipTLSVerify: true -# skipCreateNamespace: true -# wait: true -# skipCRDs: true - values: - service: - type: ClusterIP + version: 11.9.1 + namespace: db + skipCreateNamespace: false + wait: true set: - - name: param1 - value: value2 -# 
valuesFrom: -# - configMapKeyRef: -# key: values.yaml -# name: default-vals -# namespace: wordpress -# optional: false -# - secretKeyRef: -# key: svalues.yaml -# name: svals -# namespace: wordpress -# optional: false -# connectionDetails: -# - apiVersion: v1 -# kind: Service -# name: wordpress-example -# namespace: wordpress -# fieldPath: spec.clusterIP -# #fieldPath: status.loadBalancer.ingress[0].ip -# toConnectionSecretKey: ip -# - apiVersion: v1 -# kind: Service -# name: wordpress-example -# namespace: wordpress -# fieldPath: spec.ports[0].port -# toConnectionSecretKey: port -# - apiVersion: v1 -# kind: Secret -# name: wordpress-example -# namespace: wordpress -# fieldPath: data.wordpress-password -# toConnectionSecretKey: password -# - apiVersion: v1 -# kind: Secret -# name: manual-api-secret -# namespace: wordpress -# fieldPath: data.api-key -# toConnectionSecretKey: api-key -# # this secret created manually (not via Helm chart), so skip 'part of helm release' check -# skipPartOfReleaseCheck: true -# writeConnectionSecretToRef: -# name: wordpress-credentials -# namespace: crossplane-system + - name: auth.username + value: healthy + - name: auth.password + value: healthy + - name: auth.database + value: fruits_database providerConfigRef: name: helm-provider EOF ``` +>**Note**: You can deploy the release file using the command `kubectl apply -f ./scripts/data/release-postgresql.yml` + +## Deploy a Helm DB chart using Composite and Compose resources + +Instead of deploying a Helm Release to request directly to the Crossplane Helm provider to deploy a Helm chart, we will now use +a `Database` composite resource (aka our own CRD) and a `Composition` resource containing the template and patches to generate the needed resources: `Release`, etc + +Deploy first the Database CRD and composition resource +```bash +kubectl apply -f ./crossplane/database-helm/composite.yml +kubectl apply -f ./crossplane/database-helm/composition.yml +``` + +To install by example a postgresql helm chart under the namespace `db` using the version `11.9.1`, creat and deploy the following resource: +```bash +cat < +API Version: snowdrop.dev/v1alpha1 +Kind: Database +... +Spec: + Parameters: + Namespace: db + Type: postgresql + Version: 11.9.1 +... 
+Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal CompositionUpdatePolicy 15s defined/compositeresourcedefinition.apiextensions.crossplane.io Default composition update policy has been selected + Normal PublishConnectionSecret 15s defined/compositeresourcedefinition.apiextensions.crossplane.io Successfully published connection details + Normal ComposeResources 15s (x2 over 15s) defined/compositeresourcedefinition.apiextensions.crossplane.io Composed resource "postgresql-helm-release" is not yet ready + Normal SelectComposition 14s (x4 over 15s) defined/compositeresourcedefinition.apiextensions.crossplane.io Successfully selected composition + Normal ComposeResources 14s (x4 over 15s) defined/compositeresourcedefinition.apiextensions.crossplane.io Successfully composed resources +``` + +A podtgresql pod should be created soon: +```bash +kubectl get pod -lapp.kubernetes.io/name=postgresql -n db +NAME READY STATUS RESTARTS AGE +postgresql-db-0 1/1 Running 0 2m41s +``` +To clean up: + +```bash +kubectl delete -f ./crossplane/database-helm +``` + + ## How to use Upbound Documentation page: https://docs.upbound.io/uxp/install/ diff --git a/crossplane/database-helm/composite.yml b/crossplane/database-helm/composite.yml new file mode 100644 index 00000000..62578b6e --- /dev/null +++ b/crossplane/database-helm/composite.yml @@ -0,0 +1,56 @@ +apiVersion: apiextensions.crossplane.io/v1 +kind: CompositeResourceDefinition +metadata: + name: databases.snowdrop.dev +spec: + group: snowdrop.dev + names: + kind: Database + plural: databases + shortNames: + - "db" + - "dbs" + versions: + - additionalPrinterColumns: + - jsonPath: .spec.parameters.size + name: SIZE + type: string + - jsonPath: .spec.parameters.namespace + name: NAMESPACE + type: string + - jsonPath: .spec.parameters.type + name: TYPE + type: string + - jsonPath: .spec.parameters.version + name: VERSION + type: string + name: v1alpha1 + served: true + referenceable: true + schema: + openAPIV3Schema: + type: object + properties: + spec: + type: object + properties: + parameters: + type: object + properties: + size: + type: string + namespace: + type: string + type: + type: string + version: + type: string + required: + - type + - version + - namespace + required: + - parameters + + + \ No newline at end of file diff --git a/crossplane/database-helm/composition.yml b/crossplane/database-helm/composition.yml new file mode 100644 index 00000000..a5dfd55a --- /dev/null +++ b/crossplane/database-helm/composition.yml @@ -0,0 +1,93 @@ +apiVersion: apiextensions.crossplane.io/v1 +kind: Composition +metadata: + name: db.local.snowdrop.dev + labels: + type: dev + provider: local +spec: + writeConnectionSecretsToNamespace: crossplane-system + compositeTypeRef: + apiVersion: snowdrop.dev/v1alpha1 + kind: Database + resources: + - name: postgresql-helm-release + base: + apiVersion: helm.crossplane.io/v1beta1 + kind: Release + metadata: + annotations: + crossplane.io/external-name: # patched + spec: + rollbackLimit: 3 + forProvider: + namespace: # patched + chart: + repository: https://charts.bitnami.com/bitnami + name: # patched + version: # patched + providerConfigRef: + name: helm-provider + patches: + - fromFieldPath: spec.parameters.namespace + toFieldPath: spec.forProvider.namespace + - fromFieldPath: spec.parameters.version + toFieldPath: spec.forProvider.chart.version + - fromFieldPath: spec.parameters.type + toFieldPath: spec.forProvider.chart.name + - fromFieldPath: metadata.name + toFieldPath: 
metadata.annotations[crossplane.io/external-name] + policy: + fromFieldPath: Required + - fromFieldPath: metadata.name + toFieldPath: metadata.name + transforms: + - type: string + string: + fmt: "%s-postgresql" + readinessChecks: + - type: MatchString + fieldPath: status.atProvider.state + matchString: deployed + - name: secret + base: + apiVersion: kubernetes.crossplane.io/v1alpha1 + kind: Object + spec: + forProvider: + manifest: + apiVersion: v1 + kind: Secret + metadata: + name: "db-secret" + namespace: #patched + data: + database: fruits_database + username: healthy + password: healthy + providerConfigRef: + name: kubernetes-provider + patches: + - fromFieldPath: spec.parameters.namespace + toFieldPath: spec.forProvider.manifest.metadata.namespace + - fromFieldPath: spec.forProvider.data.database + toFieldPath: spec.forProvider.data.database + transforms: + - type: string + string: + type: Convert + convert: ToBase64 + - fromFieldPath: spec.forProvider.data.username + toFieldPath: spec.forProvider.data.username + transforms: + - type: string + string: + type: Convert + convert: ToBase64 + - fromFieldPath: spec.forProvider.data.password + toFieldPath: spec.forProvider.data.password + transforms: + - type: string + string: + type: Convert + convert: ToBase64 \ No newline at end of file diff --git a/crossplane/database-helm/database.yml b/crossplane/database-helm/database.yml new file mode 100644 index 00000000..279dc132 --- /dev/null +++ b/crossplane/database-helm/database.yml @@ -0,0 +1,13 @@ +apiVersion: snowdrop.dev/v1alpha1 +kind: Database +metadata: + name: postgresql +spec: + compositionSelector: + matchLabels: + provider: local + type: dev + parameters: + type: postgresql + version: 11.9.1 + namespace: db diff --git a/pom.xml b/pom.xml index 046d4453..8aa83c25 100644 --- a/pom.xml +++ b/pom.xml @@ -179,7 +179,6 @@ app - diff --git a/scripts/crossplane.sh b/scripts/crossplane.sh index 33bc52b2..1ae5a38a 100755 --- a/scripts/crossplane.sh +++ b/scripts/crossplane.sh @@ -9,7 +9,7 @@ source ${SCRIPTS_DIR}/play-demo.sh export TYPE_SPEED=400 NO_WAIT=true -function help() { +function usage() { fmt "" fmt "Usage: $0 [option]" fmt "" @@ -20,63 +20,115 @@ function help() { fmt "\tdeploy \tInstall the crossplane helm chart and RBAC" fmt "\tremove \tRemove the crossplane helm chart" fmt "\thelm-provider \tDeploy the crossplane Helm provider and configure it" + fmt "\tkube-provider \tDeploy the crossplane Kubernetes provider and configure it" } function deploy() { - helm upgrade -i crossplane \ - crossplane \ + helm repo add crossplane-stable https://charts.crossplane.io/stable + helm repo update crossplane-stable + helm install crossplane \ -n crossplane-system \ --create-namespace \ - --repo https://charts.crossplane.io/stable + crossplane-stable/crossplane kubectl rollout status deployment/crossplane -n crossplane-system -} -function helmProvider() { - p "Installing the Helm provider ..." 
+ p "Configure the ControllerConfig resource to set the debug arg" cat < local-kind-kubeconfig" pe "k cp local-kind-kubeconfig ${NAMESPACE}/${POD_NAME:4}:/tmp/local-kind-kubeconfig -c primaza-app" - NS_TO_BE_EXCLUDED=${NS_TO_BE_EXCLUDED:-default,kube-system,ingress,primaza,pipelines-as-code,tekton-pipelines,tekton-pipelines-resolvers,vault,local-path-storage,kube-node-lease} + NS_TO_BE_EXCLUDED=${NS_TO_BE_EXCLUDED:-default,kube-system,ingress,primaza,pipelines-as-code,tekton-pipelines,tekton-pipelines-resolvers,vault,local-path-storage,local-path-storage,kube-node-lease} RESULT=$(k exec -i $POD_NAME -c primaza-app -n ${NAMESPACE} -- sh -c "curl -X POST -H 'Content-Type: multipart/form-data' -H 'HX-Request: true' -F name=local-kind -F excludedNamespaces=$NS_TO_BE_EXCLUDED -F environment=DEV -F url=$KIND_URL -F kubeConfig=@/tmp/local-kind-kubeconfig -s -i localhost:8080/clusters") if [ "$RESULT" = *"500 Internal Server Error"* ] then @@ -110,13 +118,20 @@ function deploy() { } function localDeploy() { + ENVARGS="" + if [[ -n "${VAULT_URL}" ]]; then ENVARGS+="--set app.envs.vault.url=${VAULT_URL}"; fi + if [[ -n "${VAULT_USER}" ]]; then ENVARGS+="--set app.envs.vault.user=${VAULT_USER}"; fi + if [[ -n "${VAULT_PASSWORD}" ]]; then ENVARGS+="--set app.envs.vault.password=${VAULT_PASSWORD}"; fi + pe "k create namespace ${NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -" pe "k config set-context --current --namespace=${NAMESPACE}" pe "helm install --devel primaza-app \ --dependency-update \ ${PROJECT_DIR}/target/helm/kubernetes/primaza-app \ -n ${NAMESPACE} \ - --set app.image=localhost:5000/${REGISTRY_GROUP}/primaza-app:${IMAGE_VERSION} 2>&1 1>/dev/null" + --set app.image=${PRIMAZA_IMAGE_NAME} \ + ${ENVARGS} \ + 2>&1 1>/dev/null" pe "k wait -n ${NAMESPACE} \ --for=condition=ready pod \ @@ -150,11 +165,12 @@ function remove() { } case $1 in - install_kind) "$@"; exit;; + -h) primazaUsage; exit;; build) "$@"; exit;; deploy) "$@"; exit;; - localDeploy) "$@"; exit;; + localdeploy) localDeploy; exit;; remove) "$@"; exit;; + *) primazaUsage; exit;; esac remove diff --git a/scripts/vault.sh b/scripts/vault.sh index 99c93ad9..2e564554 100755 --- a/scripts/vault.sh +++ b/scripts/vault.sh @@ -221,7 +221,7 @@ esac install # DO NOT WORK -> kubectl rollout status statefulset/vault -n vault -sleep 60 +sleep 240 unseal login #enableKV1SecretEngine