From 053d83216ec0240cc1ccc24d2db847c404cc006a Mon Sep 17 00:00:00 2001 From: GitHub Action Date: Tue, 4 Jun 2024 14:33:22 +0000 Subject: [PATCH] Deployed 47df7fd with MkDocs version: 1.6.0 --- _partials/destroy/index.html | 9 +++++---- search/search_index.json | 2 +- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/_partials/destroy/index.html b/_partials/destroy/index.html index 28d88d87..f05f4e25 100644 --- a/_partials/destroy/index.html +++ b/_partials/destroy/index.html @@ -1,6 +1,7 @@ Destroy - Amazon Crossplane Blueprints

Destroy

terraform destroy -target="module.crossplane" -auto-approve
-terraform destroy -target="module.eks_blueprints_addons" -auto-approve
-terraform destroy -target="module.eks" -auto-approve
-terraform destroy -target="module.vpc" -auto-approve
-terraform destroy -auto-approve
+terraform destroy -target="module.gatekeeper" -auto-approve
+terraform destroy -target="module.eks_blueprints_addons" -auto-approve
+terraform destroy -target="module.eks" -auto-approve
+terraform destroy -target="module.vpc" -auto-approve
+terraform destroy -auto-approve
 
\ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json index 22575a47..e73f27ca 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":""},{"location":"#blueprints-for-crossplane-on-amazon-eks","title":"Blueprints for Crossplane on Amazon EKS","text":"

Note: AWS Blueprints for Crossplane on Amazon Elastic Kubernetes Service is under active development and should be considered a pre-production framework.

Welcome to the AWS Crossplane Blueprints.

"},{"location":"#introduction","title":"Introduction","text":"

AWS Crossplane Blueprints is an open source repo to bootstrap Amazon Elastic Kubernetes Service Clusters and provision AWS resources with a library of Crossplane Compositions (XRs) with Composite Resource Definitions (XRDs).

If you are new to Crossplane, it is highly recommended that you familiarize yourself with Crossplane concepts. The official documentation and this blog post are good starting points.

Compositions in this repository enable platform teams to define and offer bespoke AWS infrastructure APIs to application development teams, based on predefined Composite Resources (XRs) that encompass one or more AWS Managed Resources (MRs).

"},{"location":"#features","title":"Features","text":"

\u2705 Bootstrap Amazon EKS Cluster and Crossplane with Terraform \\ \u2705 Bootstrap Amazon EKS Cluster and Crossplane with eksctl \\ \u2705 AWS Provider - Crossplane Compositions for AWS Services \\ \u2705 Upbound AWS Provider - Upbound Crossplane Compositions for AWS Services \\ \u2705 AWS IRSA on EKS - AWS Provider Config with IRSA enabled \\ \u2705 Patching 101 - Learn how patches work. \\ \u2705 Example deployment patterns for Composite Resources (XRs) for AWS Provider \\ \u2705 Example deployment patterns for Crossplane Managed Resources (MRs)

"},{"location":"#getting-started","title":"Getting Started","text":"

\u2705 Bootstrap EKS Cluster

This repo provides multiple options to bootstrap Amazon EKS Clusters with Crossplane and AWS Providers. Check out the following README for the full deployment configuration.

\u2705 Configure the EKS cluster

Enable IRSA support for your EKS cluster so Crossplane has the necessary permissions to spin up other AWS services. Depending on the provider you use, refer to the bootstrap README for this configuration.
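
For reference, one common way to associate an IAM OIDC provider with a cluster (a prerequisite for IRSA) is shown below. This is a hedged example assuming an eksctl-managed cluster; the bootstrap README remains the authoritative set of steps.

# Associate an IAM OIDC provider so IRSA can be used (replace the placeholders).\neksctl utils associate-iam-oidc-provider \\\n  --cluster <CLUSTER_NAME> \\\n  --region <REGION> \\\n  --approve\n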

\u2705 Deploy the Examples

With the setup complete, you can then follow the instructions for deploying the Crossplane Compositions or Managed Resources you want to experiment with. Keep in mind that the list of Compositions and Managed Resources in this repository is evolving.

\u2705 Work with nested compositions.

Compositions can be nested to further define and abstract application-specific needs.

\u2705 Work with external secrets.

Crossplane can be configured to publish secrets external to the cluster in which it runs.

\u2705 Check out the RDS day 2 operation doc

\u2705 Check out example Gatekeeper configurations.

\u2705 Upbound AWS provider examples

"},{"location":"#learn-more","title":"Learn More","text":""},{"location":"#debugging","title":"Debugging","text":"

For debugging Compositions, CompositeResourceDefinitions, etc., please see the debugging guide.

"},{"location":"#adopters","title":"Adopters","text":"

A list of publicly known users of the Crossplane Blueprints for Amazon EKS project can be found in ADOPTERS.md.

"},{"location":"#security","title":"Security","text":"

See CONTRIBUTING for more information.

"},{"location":"#license","title":"License","text":"

This library is licensed under the Apache 2.0 License.

"},{"location":"faq/","title":"Frequently Asked Questions","text":""},{"location":"faq/#timeouts-on-destroy","title":"Timeouts on destroy","text":"

Customers who are deleting their environments using terraform destroy may see timeout errors when VPCs are being deleted. This is due to a known issue in the vpc-cni.

Customers may face a situation where ENIs that were attached to EKS managed nodes (the same may apply to self-managed nodes) are not deleted by the VPC CNI as expected, which leads to IaC tool failures, such as:

The current recommendation is to execute cleanup in the following order (a rough command-level sketch follows the list):

  1. delete all pods that have been created in the cluster.
  2. add a delay / wait
  3. delete VPC CNI
  4. delete nodes
  5. delete cluster
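
A rough command-level sketch of that order is shown below. It is illustrative only and assumes kubectl access to the cluster and the Terraform module names used in this repository.

# 1. Delete workloads so their ENIs can be released.\nkubectl delete pods --all --all-namespaces\n# 2. Add a delay so the VPC CNI can clean up ENIs.\nsleep 300\n# 3. Delete the VPC CNI (aws-node) DaemonSet.\nkubectl -n kube-system delete daemonset aws-node\n# 4. & 5. Delete nodes and the cluster, then the remaining infrastructure (module names assumed).\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -auto-approve\n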
"},{"location":"getting-started/","title":"Getting Started","text":"

This getting started guide will help you bootstrap your first cluster using Crossplane Blueprints.

"},{"location":"getting-started/#prerequisites","title":"Prerequisites","text":"

Ensure that you have installed the following tools locally:

"},{"location":"getting-started/#deploy","title":"Deploy","text":""},{"location":"getting-started/#eksctl","title":"eksctl","text":"
  1. TBD
"},{"location":"getting-started/#terraform","title":"terraform","text":"
  1. For consuming Crossplane Blueprints, please see the Getting Started section. For exploring and trying out the patterns provided, please clone the project locally to get up and running quickly. After cloning, cd into the pattern directory of your choice.

  2. To provision the pattern, the typical steps of execution are as follows:

    terraform init\nterraform apply -target=\"module.vpc\" -auto-approve\nterraform apply -target=\"module.eks\" -auto-approve\nterraform apply -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform apply -target=\"module.crossplane\" -auto-approve\nterraform apply -auto-approve\n
  3. Once all of the resources have successfully been provisioned, the following command can be used to update the kubeconfig on your local machine and allow you to interact with your EKS Cluster using kubectl.

    aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME> --alias <CLUSTER_NAME>\n

    Terraform outputs

    The examples will output the aws eks update-kubeconfig ... command as part of the Terraform apply output to simplify this process for users

  4. Once you have updated your kubeconfig, you can verify that you are able to interact with your cluster by running the following command:

    kubectl get nodes\n

    This should return a list of the node(s) running in the newly created cluster. If any errors are encountered, please re-trace the steps above and consult the pattern's README.md for more details on any additional/specific steps that may be required.

"},{"location":"getting-started/#destroy","title":"Destroy","text":"

To teardown and remove the resources created in the bootstrap, the typical steps of execution are as follows:

terraform destroy -target=\"module.crossplane\" -auto-approve\nterraform destroy -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -target=\"module.vpc\" -auto-approve\nterraform destroy -auto-approve\n

Resources created outside of Terraform

Some resources may have been created that Terraform is not aware of, and these will cause issues when attempting to clean up the pattern. Please see destroy.md for more details.

"},{"location":"_partials/destroy/","title":"Destroy","text":"
terraform destroy -target=\"module.crossplane\" -auto-approve\nterraform destroy -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -target=\"module.vpc\" -auto-approve\nterraform destroy -auto-approve\n
"},{"location":"patterns/debugging/","title":"Debugging CompositeResourceDefinitions (XRD) and Compositions","text":""},{"location":"patterns/debugging/#composite-resources-and-claim-overview","title":"Composite resources and claim overview","text":"
    flowchart LR\n    subgraph \"Some namespace\"\n        direction LR\n        XRC[\"Claim\"]\n    end\n\n    subgraph \"Cluster Scoped\"\n        direction LR\n        XR(\"Composite Resource\")\n        MR1(\"Managed Resource \\n(e.g. RDS instance)\")\n        MR2(\"Managed Resource \\n(e.g. IAM Role)\")\n    end\n    XR --> |\"spec.resourceRef\"| MR1\n    XR --> |\"spec.resourceRef\"| MR2\n    XRC --> |\"spec.resourceRef\"| XR\n
"},{"location":"patterns/debugging/#general-debugging-steps","title":"General debugging steps","text":"

Most error messages are logged to resources' event field. Whenever your Composite Resources are not getting provisioned, follow these steps: 1. Get the events for the root resource using kubectl describe or kubectl get event 2. If there are errors in the events, address them. 3. If there are no errors, follow its sub-resources: kubectl get <KIND> <NAME> -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq 4. Go back to step 1 using one of the resources returned by step 3.

Note: Debug logging is also enabled for the AWS provider pods. You may find it useful to check the provider pod logs for extra information on failures. You can also disable logging here.

# kubectl get pods -n crossplane-system\nNAME                                                READY   STATUS    RESTARTS   AGE\ncrossplane-5b6896bb4c-mjr8x                         1/1     Running   0          12d\ncrossplane-rbac-manager-7874897d59-fc9wf            1/1     Running   0          12d\nprovider-aws-f6a4a9bdba04-84ddf67474-z78nl          1/1     Running   0          12d\nprovider-kubernetes-cfae2275d58e-6b7bcf5bb5-2rjk2   1/1     Running   0          8d\n\n# For the AWS provider logs\n# kubectl -n crossplane-system logs provider-aws-f6a4a9bdba04-84ddf67474-z78nl | less\n\n# For Crossplane core logs\n# kubectl -n crossplane-system logs crossplane-5b6896bb4c-mjr8x  | less\n
"},{"location":"patterns/debugging/#debugging-example","title":"Debugging Example","text":""},{"location":"patterns/debugging/#composition","title":"Composition","text":"

An example application was deployed as a claim of a composite resource. Kind = ExampleApp. Name = example-application.

The example application never reaches the available state.

  1. Run kubectl describe exampleapp example-application

    Status:\nConditions:\n    Last Transition Time:  2022-03-01T22:57:38Z\n    Reason:                Composite resource claim is waiting for composite resource to become Ready\n    Status:                False\n    Type:                  Ready\nEvents:                    <none>\n

  2. No error in events. Find its cluster scoped resource (composite resource).

    # kubectl get exampleapp example-application -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq\n\n{\n  \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n  \"kind\": \"XExampleApp\",\n  \"name\": \"example-application-xqlsz\"\n}\n

  3. In the above output, we see the cluster scoped resource for this claim. Kind = XExampleApp name = example-application-xqlsz
  4. Get the cluster resource's event.
    # kubectl describe xexampleapp example-application-xqlsz\n\nEvents:\nType     Reason                   Age               From                                                             Message\n----     ------                   ----              ----                                                             -------\nNormal   PublishConnectionSecret  9s (x2 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully published connection details\nNormal   SelectComposition        6s (x6 over 11s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully selected composition\nWarning  ComposeResources         6s (x6 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  cannot render composed resource from resource template at index 3: cannot use dry-run create to name composed resource: an empty namespace may not be set during creation\nNormal   ComposeResources         6s (x6 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully composed resources\n
  5. We see errors in the events. It is complaining about a namespace not being specified in its Composition. For this particular kind of error, we can get its sub-resources and check which one was not created.

    # kubectl get xexampleapp example-application-xqlsz -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq\n[\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XDynamoDBTable\",\n        \"name\": \"example-application-xqlsz-6j9nm\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XIAMPolicy\",\n        \"name\": \"example-application-xqlsz-lp9wt\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XIAMPolicy\",\n        \"name\": \"example-application-xqlsz-btwkn\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"IRSA\"\n    }\n]\n
    6. Notice the last element in the array does not have a name. When a resource in a Composition fails validation, the resource object is not created and will not have a name. For this particular issue, we need to specify the namespace for the IRSA resource.

"},{"location":"patterns/debugging/#composition-definition","title":"Composition Definition","text":"

Debugging Composition Definitions is similar to debugging Compositions.

  1. Get XRD
    # kubectl get xrd testing.awsblueprints.io\nNAME                       ESTABLISHED   OFFERED   AGE\ntesting.awsblueprints.io                           66s\n
  2. Notice its status is not established. We describe this XRD to get its events
    # kubectl describe xrd testing.awsblueprints.io\nEvents:\nType     Reason              Age                    From                                                             Message\n----     ------              ----                   ----                                                             -------\nNormal   ApplyClusterRoles   3m19s (x3 over 3m19s)  rbac/compositeresourcedefinition.apiextensions.crossplane.io     Applied RBAC ClusterRoles\nNormal   RenderCRD           18s (x9 over 3m19s)    defined/compositeresourcedefinition.apiextensions.crossplane.io  Rendered composite resource CustomResourceDefinition\nWarning  EstablishComposite  18s (x9 over 3m19s)    defined/compositeresourcedefinition.apiextensions.crossplane.io  cannot apply rendered composite resource CustomResourceDefinition: cannot create object: CustomResourceDefinition.apiextensions.k8s.io \"testing.awsblueprints.io\" is invalid: metadata.name: Invalid value: \"testing.awsblueprints.io\": must be spec.names.plural+\".\"+spec.group\n
  3. We see in the events that the CRD cannot be generated for this XRD. In this case, we need to ensure the metadata.name is spec.names.plural+\".\"+spec.group, as sketched below.
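
    For illustration, a minimal sketch of an XRD whose metadata.name satisfies that rule is shown below; the group, kind, and version here are hypothetical and do not come from this repository.

    # Hypothetical XRD: metadata.name must equal spec.names.plural + . + spec.group.\nkubectl apply -f - <<'EOF'\napiVersion: apiextensions.crossplane.io/v1\nkind: CompositeResourceDefinition\nmetadata:\n  name: testings.awsblueprints.io   # plural (testings) + . + group (awsblueprints.io)\nspec:\n  group: awsblueprints.io\n  names:\n    kind: Testing\n    plural: testings\n  versions:\n    - name: v1alpha1\n      served: true\n      referenceable: true\n      schema:\n        openAPIV3Schema:\n          type: object\nEOF\n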
"},{"location":"patterns/debugging/#providers","title":"Providers","text":"

There are two ways to install providers in Crossplane: using configuration.pkg.crossplane.io or using provider.pkg.crossplane.io. In this repository, we use provider.pkg.crossplane.io. Note that if you define a configuration.pkg.crossplane.io object, Crossplane will create a provider.pkg.crossplane.io object, and that object is managed by Crossplane. Please refer to this guide for more information about Crossplane Packages.
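
For reference, a provider.pkg.crossplane.io object is typically declared along the lines of the sketch below; the package version shown is illustrative, so check the bootstrap manifests for the version actually used.

# Minimal sketch of a Provider package object (provider.pkg.crossplane.io).\nkubectl apply -f - <<'EOF'\napiVersion: pkg.crossplane.io/v1\nkind: Provider\nmetadata:\n  name: provider-aws\nspec:\n  package: crossplane/provider-aws:v0.29.0\nEOF\n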

If you are experiencing provider issues, the steps below are a good starting point.

  1. Check the status of the provider object.

    # kubectl describe provider.pkg.crossplane.io provider-aws\nStatus:\n    Conditions:\n        Last Transition Time:  2022-08-04T16:19:44Z\n        Reason:                HealthyPackageRevision\n        Status:                True\n        Type:                  Healthy\n        Last Transition Time:  2022-08-04T16:14:29Z\n        Reason:                ActivePackageRevision\n        Status:                True\n        Type:                  Installed\n    Current Identifier:      crossplane/provider-aws:v0.29.0\n    Current Revision:        provider-aws-a2e16ca2fc1a\nEvents:\n    Type    Reason                  Age                      From                                 Message\n    ----    ------                  ----                     ----                                 -------\n    Normal  InstallPackageRevision  9m49s (x237 over 4d17h)  packages/provider.pkg.crossplane.io  Successfully installed package revision\n
    In the output above, we see that this provider is healthy. To get more information about this provider, we can dig deeper. The Current Revision field tells us which object to look at next.

  2. When you create a provider object, Crossplane will create a ProviderRevision object based on the contents of the OCI image. In this example, we are specifying the OCI image to be crossplane/provider-aws:v0.29.0. This image contains a YAML file which defines many Kubernetes objects such as Deployment, ServiceAccount, and CRDs. The ProviderRevision object creates resources necessary for a provider to function based on the contents of the YAML file. To inspect what is deployed as part of the provider package, we inspect the ProviderRevision object. The Current Revision field above indicates which ProviderRevision object is currently used for this provider.

    # kubectl get providerrevision provider-aws-a2e16ca2fc1a\n\nNAME                        HEALTHY   REVISION   IMAGE                             STATE    DEP-FOUND   DEP-INSTALLED   AGE\nprovider-aws-a2e16ca2fc1a   True      1          crossplane/provider-aws:v0.29.0   Active                               19d\n

    When you describe the object, you will find that many objects are managed by this same object.

    # kubectl describe providerrevision provider-aws-a2e16ca2fc1a\n\nStatus:\n    Controller Ref:\n        Name:  provider-aws-a2e16ca2fc1a\n    Object Refs:\n        API Version:  apiextensions.k8s.io/v1\n        Kind:         CustomResourceDefinition\n        Name:         natgateways.ec2.aws.crossplane.io\n        UID:          5c36d1bc-61b8-44f8-bca0-47e368af87a9\n        ....\nEvents:\n    Type    Reason             Age                    From                                         Message\n    ----    ------             ----                   ----                                         -------\n    Normal  SyncPackage        22m (x369 over 4d18h)  packages/providerrevision.pkg.crossplane.io  Successfully configured package revision\n    Normal  BindClusterRole    15m (x348 over 4d18h)  rbac/providerrevision.pkg.crossplane.io      Bound system ClusterRole to provider ServiceAccount(s)\n    Normal  ApplyClusterRoles  15m (x364 over 4d18h)  rbac/providerrevision.pkg.crossplane.io      Applied RBAC ClusterRoles\n

    The event field will also indicate any issues that may have occurred during this process. 3. If you do not see any errors in the event field above, you should check if deployments and pods were provisioned successfully. As a part of the provider configuration process, a deployment is created:

    # kubectl get deployment -n crossplane-system\n\nNAME                        READY   UP-TO-DATE   AVAILABLE   AGE\ncrossplane                  1/1     1            1           105d\ncrossplane-rbac-manager     1/1     1            1           105d\nprovider-aws-a2e16ca2fc1a   1/1     1            1           19d\n\n# kubectl get pods -n crossplane-system\nNAME                                         READY   STATUS    RESTARTS   AGE\ncrossplane-54db688c8d-qng6b                  2/2     Running   0          4d19h\ncrossplane-rbac-manager-5776c9fbf4-wn5rj     1/1     Running   0          4d19h\nprovider-aws-a2e16ca2fc1a-776769ccbd-4dqml   1/1     Running   0          4d23h\n
    If there are any pods failing, check their logs and remedy the problem.

"},{"location":"patterns/nested-compositions/","title":"Nested Compositions","text":"

Compositions can be nested within a composition. Take a look at the example-application defined in the compositions/aws-provider/example-application directory. The Composition contains Compositions defined in other directories and creates a DynamoDB table, IAM policies for the table, a Kubernetes service account, and an IAM role for service accounts (IRSA). This pattern is very powerful: it lets you define your abstractions based on someone else's prior work.

An example yaml file to deploy this Composition is available at examples/aws-provider/composite-resources/example-application/example-application.yaml.

Install the AWS Compositions and XRDs following the instructions in compositions/README.md

Let\u2019s take a look at how this example application can be deployed.

kubectl create ns example-app\n# namespace/example-app created\n\nkubectl apply -f examples/aws-provider/composite-resources/example-application/example-application.yaml\n# exampleapp.awsblueprints.io/example-application created\n

You can look at the example application object, but it doesn\u2019t tell you much about what is happening. Let\u2019s dig deeper.

# kubectl get exampleapp -n example-app example-application -o=jsonpath='{.spec.resourceRef}'\n{\"apiVersion\":\"awsblueprints.io/v1alpha1\",\"kind\":\"XExampleApp\",\"name\":\"example-application-8x9fr\"}\n
By looking at the spec.resourceRef field, you can see which cluster wide object this object created. Let\u2019s see what resources are created in the cluster wide object.

# kubectl get XExampleApp example-application-8x9fr -o=jsonpath='{.spec.resourceRefs}' | jq\n[\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XDynamoDBTable\",\n    \"name\": \"example-application-8x9fr-svxxg\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIAMPolicy\",\n    \"name\": \"example-application-8x9fr-w9fgb\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIAMPolicy\",\n    \"name\": \"example-application-8x9fr-r5hzx\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIRSA\",\n    \"name\": \"example-application-8x9fr-r7dzn\"\n  },\n  {\n    \"apiVersion\": \"kubernetes.crossplane.io/v1alpha1\",\n    \"kind\": \"Object\",\n    \"name\": \"example-application-8x9fr-bv7tl\"\n  }\n]\n

We see that it has five sub objects. Notice the first object is the XDynamoDBTable kind. This application Composition contains the DynamoDB table Composition. In fact, four out of five sub objects in the above output are Compositions.

Let\u2019s take a look at the XIRSA object. As the name implies, this object is responsible for setting up EKS IRSA for the application pod to use.

# kubectl get XIRSA example-application-8x9fr-r7dzn -o jsonpath='{.spec.resourceRefs}' | jq\n[\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"Role\",\n    \"name\": \"example-application-8x9fr-nwgbh\"\n  },\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"RolePolicyAttachment\",\n    \"name\": \"example-application-8x9fr-n6g8q\"\n  },\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"RolePolicyAttachment\",\n    \"name\": \"example-application-8x9fr-kzrsg\"\n  },\n  {\n    \"apiVersion\": \"kubernetes.crossplane.io/v1alpha1\",\n    \"kind\": \"Object\",\n    \"name\": \"example-application-8x9fr-bzfr6\"\n  }\n]\n

As you can see, it created an IAM Role and attached policies. It also created a Kubernetes service account as represented by the last element. If you look at the created service account, it has the necessary properties for IRSA to function.

# kubectl get sa -n example-app example-app -o yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/example-application-8x9fr-nwgbh\n
You can examine the IAM Role as well.

# aws iam list-roles --query 'Roles[?starts_with(RoleName, `example-application`) == `true`]'\n[\n    {\n        \"Path\": \"/\",\n        \"RoleName\": \"example-application-8x9fr-nwgbh\",\n        \"Arn\": \"arn:aws:iam::1234569091:role/example-application-8x9fr-nwgbh\",\n        \"AssumeRolePolicyDocument\": {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": {\n                        \"Federated\": \"arn:aws:iam::1234569091:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/12345919291AVBD\"\n                    },\n                    \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n                    \"Condition\": {\n                        \"StringEquals\": {\n                            \"oidc.eks.us-west-2.amazonaws.com/id/abcd12345:sub\": \"system:serviceaccount:example-app:example-app\"\n                        }\n                    }\n                }\n            ]\n        },\n        \"MaxSessionDuration\": 3600\n    }\n] \n
"},{"location":"patterns/patching-101/","title":"Patching 101","text":""},{"location":"patterns/patching-101/#crossplane-patching-basics","title":"Crossplane Patching Basics","text":""},{"location":"patterns/patching-101/#component-relationships","title":"Component relationships","text":"
flowchart LR \n\nXRD(Composite Resource Definition)\nC(Composition)\nCR(Composite Resource)\nMR(Managed Resource)\nMR2(Managed Resource)\nClaim\nBucket(S3 Bucket)\nTable(DynamoDB Table)\n\nC --satisfies--> XRD\nXRD --define schema \\n create CRDs--> CRDs\nC --defines--> CR --> MR --managed--> Bucket\nCR --> MR2 --manage--> Table\nClaim --trigger instantiation--> CR\n
"},{"location":"patterns/patching-101/#from-composite-resource-to-managed-resource","title":"From Composite Resource to Managed Resource","text":"

Crossplane compositions allow you to modify sub-resources based on arbitrary fields from their composite resource. This type of patch is referred to as FromCompositeFieldPath. Take, for example:

type: FromCompositeFieldPath\nfromFieldPath: spec.region\ntoFieldPath: spec.forProvider.region\n

This tells Crossplane to: 1. Look at the spec.region field in the Composite Resource. 2. Then copy that value into the spec.forProvider.region field in this instance of the managed resource.

flowchart LR\n\nsubgraph Composite Resource\n    cs(spec: \\n&nbsp region: <font color=red>us-west-2</font>)\nend\n\nsubgraph Managed Resource\n    ms(spec: \\n&nbsp forProvider: \\n&nbsp&nbsp region: <font color=red>us-west-2</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left \n\ncs --> ms\n
"},{"location":"patterns/patching-101/#from-managed-resource-to-composite-resource","title":"From Managed Resource to Composite Resource","text":"

Compositions also allow you to modify the composite resource from its sub resources. For example:

type: ToCompositeFieldPath\nfromFieldPath: status.atProvider.arn\ntoFieldPath: status.bucketArn\npolicy:\n  fromFieldPath: Optional # This can be omitted since it defaults to Optional.\n

This tells Crossplane to: 1. Look at the status.atProvider.arn field on the managed resource. 2. If the status.atProvider.arn field is empty, skip this patch. 3. Copy the value into the status.bucketArn field on the composite resource.

flowchart LR\n\nsubgraph Managed Resource\n    ms(status: \\n&nbsp atProvider: \\n&nbsp&nbsp arn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\n\nstyle cs text-align:left\nstyle ms text-align:left\n\nms --> cs\n
"},{"location":"patterns/patching-101/#putting-them-together","title":"Putting them together","text":"

With these patching methods together, you can pass values between managed resources.

type: FromCompositeFieldPath\nfromFieldPath: status.bucketArn\ntoFieldPath: spec.forProvider.bucketArn\npolicy:\n  fromFieldPath: Required\n

This tells Crossplane to: 1. Look at the status.bucketArn field in the Composite Resource. 2. If the status.bucketArn field is empty, do not skip this patch; instead, stop composing this managed resource. 3. Once the status.bucketArn field is filled with a value, copy that value into spec.forProvider.bucketArn in the managed resource.

With the use of the Required policy, you can create a soft dependency. This is useful when you do not want to create a resource before another resource is ready.

flowchart LR\n\nsubgraph Managed Resource 1\n    ms(status: \\n&nbsp atProvider: \\n&nbsp&nbsp arn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Managed Resource 2\n    ms2(spec: \\n&nbsp forProvider: \\n&nbsp&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left\nstyle ms2 text-align:left\n\nms --> cs\ncs --> ms2\n
"},{"location":"patterns/patching-101/#transform","title":"Transform","text":"

You can also perform modifications to values when patching. For example, you can use the following transformation to extract the accountId from this managed policy's ARN.

type: ToCompositeFieldPath\nfromFieldPath: status.policyArn\ntoFieldPath: status.accountId\ntransforms:\n  - type: string\n    string:\n      type: Regexp\n      regexp:\n        match: 'arn:aws:iam::(\\d+):.*'\n        group: 1\n

This tells Crossplane to: 1. Look at the status.policyArn field in the Managed Resource. 2. If the field has a value, take that value and run a regular expression match against it. 3. When there is a match, take the first capture group and store it in the status.accountId field in the Composite Resource.

flowchart LR\n\nsubgraph Managed Resource\n    ms(status: \\n&nbsp policyArn: arn:aws:iam::<font color=red>12345</font>:policy/my-policy)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp accountId: <font color=red>12345</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left\n\nms --regular expression match--> cs\n
"},{"location":"patterns/patching-101/#reference","title":"Reference","text":"

See the official documentation for more information. https://docs.crossplane.io/master/concepts/composition/#patch-types

"},{"location":"patterns/rds-day-2/","title":"RDS day 2 operations","text":""},{"location":"patterns/rds-day-2/#background-and-problem-statement","title":"Background and problem statement","text":"

Managing databases can be challenging because they are stateful, not easily replaceable, and data loss could have significant business impacts. An unexpected restart could cause havoc for applications that depend on them. Because of this, database users and administrators want to offload the management, maintenance, and availability of databases to another entity such as a cloud provider. Amazon RDS is one such service. The Crossplane AWS provider aims to create building blocks for a self-service experience for developers by providing the ability to manage AWS resources in Kubernetes-native ways.

In Amazon RDS, some operations require an instance restart; for example, version upgrades and storage size modifications. RDS attempts to minimize the impact of such operations by: 1. Defining a scheduled maintenance window. 2. Queueing the changes that you want to make (note that some of these changes may not need restarts). 3. Applying the queued changes during the next scheduled maintenance window.

This approach is fundamentally different from GitOps. In GitOps, when a change is checked into your repository, it is expected that the actual resources match the specifications provided in the repository.

RDS supports applying these changes immediately instead of waiting for a scheduled maintenance window, and when using the Crossplane AWS providers, you have the option to apply changes immediately as well. This is the option that should be used when using RDS with GitOps. However, this leads to problems when enabling a self-service model where developers can provision resources on their own.

There are some problems when using the apply immediately option. - Updates made to certain fields need a restart to take effect, but this information may not be surfaced back to users. For example, changing the parameter group on an instance requires a restart, but this information is not available in the Upbound Official provider. The community provider surfaces this information in a status field. In both providers, the status fields indicate Available and ReconcileSuccess. This could give end users the illusion of a successful parameter change when, in reality, it has not taken effect yet. - Some field changes trigger an instance restart. For example, changing the instance class triggers a restart and can potentially cause an outage. Developers may not know which fields cause restarts because they are not familiar with the underlying technologies. You could document potentially dangerous fields, but that is not enough to reliably stop it from happening.

The main goal of this document is to provide guidance on how to provide guardrails for end users when managing RDS resources through Crossplane.

"},{"location":"patterns/rds-day-2/#parameter-groups","title":"Parameter Groups","text":"

Parameter Groups define how the underlying database engine is configured. For example, if you wish to change the binlog_cache_size configuration value for your MySQL database, you can do that through parameter groups. A parameter group is not limited to use by a single RDS instance; it can be used by multiple RDS instances.

In Parameter Groups, there are two types of parameters: dynamic and static. Dynamic parameters do not require a restart for their values to be applied to the running instance / cluster. Static parameters require a restart for their values to be applied. Additionally, dynamic parameters support specifying how changes to them are applied. When immediate is specified, the changes to dynamic parameters are applied immediately. When pending-reboot is specified, the changes to dynamic parameters are applied during the next restart or during the next maintenance window, whichever comes first.

Since static parameters do not support the immediate apply option, specifying it in your composition could lead to unexpected errors. Therefore, extra care should be taken when exposing this resource to your end users, who may not be aware of the underlying engine's specifics.
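
For illustration, the distinction maps to the AWS CLI roughly as follows; the parameter group name and parameter values here are hypothetical.

# Dynamic parameters can be applied immediately.\naws rds modify-db-parameter-group \\\n  --db-parameter-group-name my-mysql80-params \\\n  --parameters 'ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate'\n\n# Static parameters only accept pending-reboot; the change takes effect after a restart.\naws rds modify-db-parameter-group \\\n  --db-parameter-group-name my-mysql80-params \\\n  --parameters 'ParameterName=innodb_buffer_pool_instances,ParameterValue=8,ApplyMethod=pending-reboot'\n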

Summarizing everything above, there are a few general approaches to managing RDS configuration changes.

  1. You want to ensure that parameter group values in the running cluster / instance match what is defined in your Git repository with no delay. The only certain way to do this is by restarting the cluster / instance during the reconciliation process.
  2. You can wait for parameter group changes to be applied during the next maintenance window. This means you may need to wait up to 7 days for the changes to be applied.
  3. The change does not have to be applied immediately, but it needs to happen sooner than 7 days. This requires a separate workflow to restart the cluster / instance (for example, by issuing a reboot, as sketched after this list).
  4. Use the RDS Blue Green deployment feature.
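
For option 3, the separate workflow ultimately has to issue a restart. A hedged sketch of that step using the AWS CLI is shown below; the instance identifier is hypothetical.

# Reboot the instance so pending (static) parameter changes take effect.\naws rds reboot-db-instance --db-instance-identifier my-rds-instance\n# Optionally wait until the instance is available again.\naws rds wait db-instance-available --db-instance-identifier my-rds-instance\n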

For reference, problems encountered during parameter group updates in ACK and Terraform are discussed in this issue and this blog post.

"},{"location":"patterns/rds-day-2/#solutions","title":"Solutions","text":""},{"location":"patterns/rds-day-2/#considerations","title":"Considerations","text":"

As of this writing, there are 9 fields that require a restart to take effect when using a single RDS instance, and 3 fields that require a restart when using multi-AZ instances. Unfortunately, there is no native way to obtain these fields programmatically.

There are 188 static parameters in the mysql8.0 family, and a similar number exists in other parameter group families as well. You can get a list of static parameters by using the aws rds describe-engine-default-parameters command.
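
As a hedged example, the static parameter names for one family can be pulled with that command as shown below (large result sets may be paginated).

# List static parameters in the mysql8.0 family.\naws rds describe-engine-default-parameters \\\n  --db-parameter-group-family mysql8.0 \\\n  --query 'EngineDefaults.Parameters[?ApplyType==`static`].ParameterName' \\\n  --output text\n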

These fields and parameters need to be stored for use by whatever check mechanism you choose, and they need to be updated regularly.

It is also worth pointing out that when a user updates a parameter in a parameter group, the change to the parameter group itself usually works without problems. However, that is often not the real intent; the intent is to change the parameter and have it applied to a running instance. In both providers, changes to static parameters are not actually applied until the next maintenance window or a manual restart is issued.

We will discuss a few approaches to this problem below. Whichever approach you choose, it is important for the check mechanisms to work reliably. It's easy to lose users' trust when checks say there will be a restart but no restart happens, or worse, when checks fail to detect a potential restart and an outage results.

"},{"location":"patterns/rds-day-2/#check-during-pr","title":"Check during PR","text":"

Use the Pull Request as a checkpoint and ensure developers are aware of the potential consequences of their changes. An example process may look something like the following.

flowchart TD\n    Comment(Comment on PR)\n\n    subgraph Workflow\n        GetChangedFiles(Get changed files)\n        GetChangedFiles(Get changed files)\n        StepCheck(Will this cause a restart?)\n    end\n\n    subgraph Data Source \n        FieldDefinitions(Fields that need restarting)\n    end \n\n    Restart(Restart immediately)\n\n    FieldDefinitions <--reference--> StepCheck\n\n    PR(PR Created) --trigger--> GetChangedFiles --> StepCheck --yes--> Comment --> Approval(Wait for Approval) --> Merge --> GitOps(GitOps Tooling)\n    StepCheck--no--> Approval\n    GitOps --apply changes now --> Restart --> Done\n    GitOps --wait until next \\n maintenance window--> Done\n

In this example, whenever a pull request is created, a workflow is executed and a comment is created on the PR warning the developers of potential impacts. When developers approve the PR, it implies that they are aware of the consequences. To check whether a PR is impacted, you can use one of the following options: - Parse the git diff and search for changes to \"dangerous\" fields - Use kubectl diff and then look for changes to \"dangerous\" fields. This requires read access to the target cluster but is more accurate.
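
As a rough sketch, the kubectl diff option could look like the following in a CI step; the field names treated as \"dangerous\" are assumptions and should come from a list you maintain.

# Render the proposed claim changes against the live cluster and flag risky fields.\nkubectl diff -f claims/ > claim.diff || true   # kubectl diff exits non-zero when differences exist\nif grep -E 'instanceClass|engineVersion|allocatedStorage' claim.diff; then\n  echo 'WARNING: this change may trigger an RDS instance restart'\nfi\n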

"},{"location":"patterns/rds-day-2/#check-at-runtime","title":"Check at runtime","text":"

Another approach is to deny such operations at runtime using a policy engine and/or a custom validating webhook unless certain conditions are met. This means problems with RDS configuration are communicated to the developers through their GitOps tooling by providing reasons for denial. Note that it is a good idea to check at runtime even if you have a check during PR.

"},{"location":"patterns/rds-day-2/#example-1","title":"Example 1","text":"
flowchart LR\n    subgraph Kubernetes\n        ValidatingController(Policy Engine / Validating controller)\n    end \n\n    subgraph Git \n        PR(PR Merged)\n    end\n\n    subgraph Ticketing\n       Approved(Approved Changes)\n    end\n\n    GitOps(GitOps tooling)\n\n    PR(PR Merged) --> GitOps --> ValidatingController\n    ValidatingController --check--> Ticketing\n    ValidatingController --deny and provide reason--> GitOps\n    ValidatingController --Once Approved--> Restart\n

In the example above, no check is performed during PR. During admission into the Kubernetes cluster, a validating controller reaches out to the ticketing system and verifies whether this change is approved. If no approved ticket is associated with the change, it is rejected and the reason is provided.

Note that the ticketing system here is just an example. It can be any type of system that provides a decision.

"},{"location":"patterns/rds-day-2/#example-2","title":"Example 2","text":"

flowchart LR\n    subgraph Kubernetes\n        ConfigMap(ConfigMap w/ ticket numbers)\n        ValidatingController(Policy Engine / Validating controller)\n    end \n\n    subgraph Git\n        subgraph PR\n            Claim\n            Manifests(Other Manifests)\n        end\n    end\n\n    subgraph Ticketing\n       Approved(Approved Changes)\n    end\n\n    User\n    GitOps(GitOps tooling)\n\n    User --Create Ticket--> Ticketing\n    User --Annotate with ticket number--> Claim\n    PR(PR Merged) --> GitOps --> ValidatingController\n    ValidatingController --reference--> ConfigMap\n    ValidatingController --deny if not approved \\n and provide reason--> GitOps\n    Approved --create when the ticket \\n is approved--> ConfigMap\n    ValidatingController--Once Approved--> Restart\n
In this example, a developer creates a ticket in the ticketing system and annotates the infrastructure claim with the ticket number. The admission controller checks whether the change affects fields that require approval. If approval is required, the change is denied until the ticket is approved, and the reason is reported back to the GitOps tooling.

Once the ticket is approved, a ConfigMap is created with the ticket number as its name or as one of its annotations. The next time the GitOps tooling attempts to apply manifests, the admission controller sees that the ConfigMap now exists and allows the change to be deployed. Once it is deployed, the ConfigMap can be marked for deletion. In this approach, there is no need for read access to the ticketing system.
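
For illustration only, a minimal admission policy in this spirit might look like the sketch below. It assumes Kyverno as the policy engine (the examples above do not prescribe one), the claim kind and annotation key are hypothetical, and it only checks that a ticket annotation is present rather than looking up the approval ConfigMap.

# Hypothetical Kyverno policy: reject RDS claims that do not reference a change ticket.\nkubectl apply -f - <<'EOF'\napiVersion: kyverno.io/v1\nkind: ClusterPolicy\nmetadata:\n  name: require-change-ticket\nspec:\n  validationFailureAction: Enforce\n  rules:\n    - name: require-ticket-annotation\n      match:\n        any:\n          - resources:\n              kinds:\n                - RelationalStorage   # hypothetical claim kind\n      validate:\n        message: Changes to RDS claims must reference an approved change ticket.\n        pattern:\n          metadata:\n            annotations:\n              example.org/ticket: '?*'\nEOF\n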

"},{"location":"patterns/rds-day-2/#blue-green-deployment","title":"Blue Green deployment","text":"

RDS added native support for blue green deployment. This allows for safer database updates because RDS manages the process of creating an alternate instance, copying data over to it, and shifting traffic to it.

As of writing this doc, neither provider supports this functionality. Because the functionality is available in Terraform, the Upbound official provider should be able to support this in the future. In addition, this functionality is supported for MariaDB and MySQL only.

"},{"location":"patterns/rds-day-2/#break-glass-scenarios","title":"Break glass scenarios","text":"

In case of an emergency where something unexpected occurred and you need to stop providers from making changes to AWS resources, you can use one of the following methods: - To prevent providers from making changes to a specific resource, you can use the crossplane.io/paused annotation. e.g.

kubectl annotate instance.rds.aws.upbound.io my-instance crossplane.io/paused=true\n
- To prevent providers from making changes to ALL of your resources, you can set the number of replicas in the ControllerConfig to 0. This will terminate the running provider pod. e.g.
apiVersion: pkg.crossplane.io/v1alpha1\nkind: ControllerConfig\nspec:\n  replicas: 0 # This value is usually 1. \n
- If you cannot access the cluster, you can prevent providers from making changes to all or some of your resources by either removing the policy associated with the IAM role or adjusting the policy to allow it to make changes to certain resources only.

"},{"location":"patterns/rds-day-2/#references","title":"References","text":"

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/modify-multi-az-db-cluster.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#blue_green_update

"},{"location":"patterns/vault-integration/","title":"Overview","text":""},{"location":"patterns/vault-integration/#goals","title":"Goals","text":"

In this doc, we will configure the following: - A Vault server (in-cluster or outside the cluster) - A Crossplane installation with the AWS provider on EKS - Provision an S3 bucket through Crossplane. - Publish bucket information as a Vault secret. - Access the published information in Vault from a pod using the Vault Agent Injector

"},{"location":"patterns/vault-integration/#prerequisites","title":"Prerequisites","text":"

The following command line tools are required: - kubectl - helm - eksctl - aws

Note: - As of Crossplane 1.9.0, support for external secret stores is still in an alpha state and may undergo changes. - This assumes a single-cluster, multi-tenant use case. However, the underlying concepts discussed here should be applicable to multi-cluster setups as well. - This doc is based on the excellent external vault configuration guide. Please check those guides out for more detailed information.

"},{"location":"patterns/vault-integration/#procedure","title":"Procedure","text":""},{"location":"patterns/vault-integration/#provision-a-eks-cluster","title":"Provision a EKS cluster","text":"
# from this repository root\neksctl create cluster -f bootstrap/eksctl/eksctl.yaml\n
"},{"location":"patterns/vault-integration/#create-a-vault-service","title":"Create a Vault service","text":"

You can create a vault service in the same cluster as Crossplane or create a service on a VM.

"},{"location":"patterns/vault-integration/#in-cluster","title":"In-cluster","text":"

Follow: https://docs.crossplane.io/latest/guides/vault-as-secret-store/

"},{"location":"patterns/vault-integration/#on-an-external-vm","title":"On an external VM","text":"

This VM must be reachable by the Crossplane installation. If you are using an EC2 instance, routing, network ACLs, and Security Groups must be configured to allow traffic from the Crossplane pod to the VM.

The commands below assume the VM is an Ubuntu instance.

"},{"location":"patterns/vault-integration/#install-vault","title":"Install Vault","text":"

Run the following commands in your VM.

Install vault on Ubuntu following the vault docs

Configure vault

sudo systemctl enable vault.service\n\n# create a configuration file for vault. NOTE: this creates a vault service with TLS disabled. \n# This is done to make the configuration step easy to follow only. TLS should be enabled for real workloads.\ncat <<< 'ui = true\n\nstorage \"file\" {\n  path = \"/opt/vault/data\"\n}\n\nlistener \"tcp\" {\n  address = \"0.0.0.0:8200\"\n  tls_disable = 1\n}' | sudo -u vault tee /etc/vault.d/vault.hcl > /dev/null\n\nsudo systemctl start vault.service\n\nexport VAULT_ADDR='http://127.0.0.1:8200'\n# This command will print out unseal keys and the root token.\nvault operator init\nvault operator unseal # do this three times. each time with a different unseal key.\nvault secrets enable -path=secret kv-v2\nvault auth enable kubernetes\n

Get the IP address of this instance. For an EC2 instance, it should be the private IP of the instance. For a simple EC2 instance:

aws ec2 describe-instances \\\n--filters Name=instance-id,Values=<INSERT_INSTANCE_ID_HERE> \\\n| jq \".Reservations[0].Instances[0].NetworkInterfaces[0].PrivateIpAddress\"\n

"},{"location":"patterns/vault-integration/#install-vault-agent-sidecar-injector","title":"Install Vault Agent Sidecar Injector","text":"

Run the following commands from a place where you have access to your Kubernetes cluster, e.g. your laptop. The Vault Agent Sidecar Injector watches for CREATE and UPDATE events and injects vault secrets into the containers.

kubectl create ns vault-system\n# install vault injector. be sure to use the IP address obtained above.\nhelm -n vault-system install vault hashicorp/vault \\\n    --set \"injector.externalVaultAddr=http://<PRIVATE_IP_ADDRESS>:8200\"\n\n# gather the token reviewer JWT and cluster connection details used to configure Vault's Kubernetes auth\nTOKEN_REVIEW_JWT=$(kubectl -n vault-system get secret $(kubectl -n vault-system get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith(\"vault-token-\")).name') --output='go-template={{ .data.token }}' | base64 --decode)\nKUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')\nKUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)\nISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)\n

Configure Kubernetes authentication, policy, and role for Crossplane to use in your VM:

vault write auth/kubernetes/config \\\n     token_reviewer_jwt=\"$TOKEN_REVIEW_JWT\" \\\n     kubernetes_host=\"$KUBE_HOST\" \\\n     kubernetes_ca_cert=\"$KUBE_CA_CERT\" \\\n     issuer=$ISSUER\n\nvault policy write crossplane - <<EOF\npath \"secret/data/crossplane-system*\" {\n    capabilities = [\"create\", \"read\", \"update\", \"delete\"]\n}\npath \"secret/metadata/crossplane-system*\" {\n    capabilities = [\"create\", \"read\", \"update\", \"delete\"]\n}\nEOF\n\nvault write auth/kubernetes/role/crossplane \\\n    bound_service_account_names=\"*\" \\\n    bound_service_account_namespaces=crossplane-system \\\n    policies=crossplane \\\n    ttl=24h\n
"},{"location":"patterns/vault-integration/#configure-vault","title":"Configure Vault","text":"

For our test cases to work, we need to configure an additional Vault policy and role. Run the following commands in your vault pod or VM.

# {% raw %}\n# create policy and role for applications to use.\nACCESSOR=$(vault auth list | grep kubernetes | tr -s ' ' | cut -d ' ' -f3)\n\nvault policy write k8s-application - << EOF\npath \"secret/data/crossplane-system/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}/*\" {\n  capabilities = [\"read\", \"list\"]\n}\npath \"secret/metadata/crossplane-system/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}/*\" {\n  capabilities = [\"read\", \"list\"]\n}\nEOF\n\nvault write auth/kubernetes/role/k8s-application \\\n    bound_service_account_names=\"*\" \\\n    bound_service_account_namespaces=\"*\" \\\n    policies=k8s-application \\\n    ttl=1h\n\n# {% endraw %}\n
"},{"location":"patterns/vault-integration/#install-and-configure-crossplane","title":"Install and configure Crossplane","text":"

Crossplane must be configured with external secret store support. In addition, the Crossplane pod must have access to the vault token.

kubectl create ns crossplane-system\nhelm upgrade --install crossplane crossplane-stable/crossplane --namespace crossplane-system \\\n  --version 1.10.0 \\\n  --set 'args={--enable-external-secret-stores}' \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-inject\"=true \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-inject-token\"=true \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/role\"=crossplane \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-run-as-user\"=65532\n

Once Crossplane is installed, install its AWS provider.

Update the AWS provider YAML file with your role ARN, then execute the following commands.

kubectl apply -f bootstrap/eksctl/crossplane/aws-provider-vault-secret.yaml\nkubectl get ProviderRevision\n# example output\n# NAME                        HEALTHY   REVISION   IMAGE                             STATE    DEP-FOUND   DEP-INSTALLED   AGE\n# provider-aws-a2e16ca2fc1a   True      1          crossplane/provider-aws:v0.29.0   Active                               23s\n

StoreConfig objects provide Crossplane and its providers with information about how to connect to secret stores. These objects must be configured for external secret integrations to work.

Update the store config YAML file with your endpoint information. If you configured vault outside of the cluster, it should be the private IP address. e.g. 10.0.0.1:8200

kubectl apply -f bootstrap/eksctl/crossplane/store-config-vault.yaml\n\necho \"apiVersion: aws.crossplane.io/v1beta1\nkind: ProviderConfig\nmetadata:\n  name: application1-provider-config\nspec:\n  credentials:\n    source: InjectedIdentity\" | kubectl apply -f - \n

This creates two configurations for secrets stores: - A configuration named in-cluster for Crossplane (compositions). This tells Crossplane to store composition secrets in the same cluster as Kubernetes secrets. - Another configuration named vault for the AWS provider. This tells the provider to store secrets in the vault instance under the /secret/crossplane-system namespace. To access the vault instance, a token is created by the sidecar at /vault/secrets/token.

"},{"location":"patterns/vault-integration/#create-compositions","title":"Create compositions","text":"

Apply the S3 compositions:

kubectl apply -f compositions/aws-provider/s3\n

The composition of interest is compositions/aws-provider/s3/multi-tenant.yaml. This composition demonstrates the following: - ProviderConfig selection based on the claim's namespace. - Publishing bucket information to Kubernetes secrets and Vault. - Creating published Vault secrets under the claim's namespace in Vault.

"},{"location":"patterns/vault-integration/#test-compositions","title":"Test compositions","text":"

Try creating a bucket claim in the default namespace

kubectl apply -f examples/aws-provider/composite-resources/s3/multi-tenant.yaml\n
Then inspect the events for the bucket:
kubectl describe bucket\n# example events\n# Events:\n#  Type     Reason                   Age               From                                 Message\n#  ----     ------                   ----              ----                                 -------\n#  Warning  CannotConnectToProvider  1s (x5 over 14s)  managed/bucket.s3.aws.crossplane.io  cannot get referenced Provider: ProviderConfig.aws.crossplane.io \"default-provider-config\" not found\n
In the claim file, we specify a provider config name. However, this is patched out so that the provider config named <NAMESPACE>-provider-config is used instead. This is why the error message indicates that a provider config named default-provider-config is not found.

Since we created a provider config named application1-provider-config, we should be able to create a claim in a namespace called application1.

#create namespace\nkubectl create ns application1 || true\n# create in new namespace\nkubectl apply -n application1 -f examples/aws-provider/composite-resources/s3/multi-tenant.yaml\n\nkubectl -n application1 get objectstorage\n# NAME                      READY   CONNECTION-SECRET   AGE\n# standard-object-storage   True                        22s\n

Once the claim reaches the ready state, you should be able to verify secret creation:

kubectl -n crossplane-system get secret `kubectl get xobjectstorage -o json | jq -r '.items[0].metadata.uid'` -o go-template='{{range $k,$v := .data}}{{printf \"%s: \" $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{\"\\n\"}}{{end}}'\n# example output\n# bucket-name: standard-object-storage-qlgvz-hz2dn\n# region: us-west-2\n

The same information should be available in Vault:

# in your vault installation\nvault kv get secret/crossplane-system/application1/dev/bucket\n# ==================== Secret Path ====================\n# secret/data/crossplane-system/application1/dev/bucket\n#\n# ======= Metadata =======\n# Key                Value\n# ---                -----\n# created_time       2022-07-22T20:51:27.852598176Z\n# custom_metadata    map[awsblueprints.io/composition-name:s3bucket-multi-tenant.awsblueprints.io awsblueprints.io/environment:dev awsblueprints.io/provider:aws secret.crossplane.io/owner-uid:0c601153-358d-45e1-8e0a-0f34991bed82]\n# deletion_time      n/a\n# destroyed          false\n# version            1\n#\n# ====== Data ======\n# Key         Value\n# ---         -----\n# endpoint    standard-object-storage-4p2wr-lxb74\n# region      us-west-2\n
"},{"location":"patterns/vault-integration/#test-applications","title":"Test Applications","text":"

The Vault sidecar injector can inject secrets into pods. Create an example pod that accesses the secret created by the sidecar:

echo 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: test-pod\n  annotations:\n    vault.hashicorp.com/agent-inject: \"true\"\n    vault.hashicorp.com/role: \"k8s-application\"\n    vault.hashicorp.com/agent-inject-secret-credentials.txt: \"secret/crossplane-system/application1/dev/bucket\"\nspec:\n  containers:\n    - name: busybox\n      image: busybox:1.28\n      command:\n        - sh\n        - -c\n        - echo \"Hello there!\" && cat /vault/secrets/credentials.txt  && sleep 3600' | kubectl apply -f - \n

This will create an pod in the default namespace. However, the pod will not reach the ready state. Check the logs:

kubectl logs  test-pod vault-agent-init\n# URL: GET http://192.168.67.77:8200/v1/secret/data/crossplane-system/application1/dev/bucket\n# Code: 403. Errors:\n\n# * 1 error occurred:\n#   * permission denied\n

This is because the pod is created in the default namespace and the Vault policy we configured earlier does not allow it to access secrets in another namespace.

Try creating the pod in the correct namespace.

echo 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: test-pod\n  namespace: application1\n  annotations:\n    vault.hashicorp.com/agent-inject: \"true\"\n    vault.hashicorp.com/role: \"k8s-application\"\n    vault.hashicorp.com/agent-inject-secret-credentials.txt: \"secret/crossplane-system/application1/dev/bucket\"\nspec:\n  containers:\n    - name: busybox\n      image: busybox:1.28\n      command:\n        - sh\n        - -c\n        - echo \"Hello there!\" && cat /vault/secrets/credentials.txt  && sleep 3600' | kubectl apply -f - \n
The pod should reach ready state.

kubectl -n application1 logs test-pod busybox\n# Hello there!\n# data: map[endpoint:standard-object-storage-qlgvz-hz2dn region:us-west-2]\n# metadata: map[created_time:2022-07-21T21:27:38.82988124Z custom_metadata:map[awsblueprints.io/composition-name:s3bucket-multi-tenant.awsblueprints.io awsblueprints.io/environment:dev awsblueprints.io/provider:aws secret.crossplane.io/owner-uid:5089919f-e80f-4889-80f4-c8e3cacd8fb7] deletion_time: destroyed:false version:1]\n
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":""},{"location":"#blueprints-for-crossplane-on-amazon-eks","title":"Blueprints for Crossplane on Amazon EKS","text":"

Note: AWS Blueprints for Crossplane on Amazon Elastic Kubernetes Service is under active development and should be considered a pre-production framework.

Welcome to the AWS Crossplane Blueprints.

"},{"location":"#introduction","title":"Introduction","text":"

AWS Crossplane Blueprints is an open source repo to bootstrap Amazon Elastic Kubernetes Service Clusters. and provision AWS resources with a library of Crossplane Compositions (XRs) with Composite Resource Definitions (XRDs).

If you are new to Crossplane, it is highly recommended to get yourself familiarized with Crossplane concepts. The official documentation and this blog post are good starting points.

Compositions in this repository enable platform teams to define and offer bespoke AWS infrastructure APIs to the teams of application developers based on predefined Composite Resources (XRs), encompassing one or more of AWS Managed Resources (MRs)

"},{"location":"#features","title":"Features","text":"

\u2705 Bootstrap Amazon EKS Cluster and Crossplane with Terraform \\ \u2705 Bootstrap Amazon EKS Cluster and Crossplane with eksctl \\ \u2705 AWS Provider - Crossplane Compositions for AWS Services \\ \u2705 Upbound AWS Provider - Upbound Crossplane Compositions for AWS Services \\ \u2705 AWS IRSA on EKS - AWS Provider Config with IRSA enabled \\ \u2705 Patching 101 - Learn how patches work. \u2705 Example deployment patterns for Composite Resources (XRs) for AWS Provider\\ \u2705 Example deployment patterns for Crossplane Managed Resources (MRs)

"},{"location":"#getting-started","title":"Getting Started","text":"

\u2705 Bootstrap EKS Cluster

This repo provides multiple options to bootstrap Amazon EKS Clusters with Crossplane and AWS Providers. Checkout the following README for full deployment configuration

\u2705 Configure the EKS cluster

Enable IRSA support for your EKS cluster for the necessary permissions to spin up other AWS services. Depending on the provider, refer to the bootstrap README for this configuration.

\u2705 Deploy the Examples

With the setup complete, you can then follow the instructions on deploying the Crossplane Compositions or Managed Resources you want to experiment with. Keep in mind that the list of Compositions and Managed Resources in this repository is evolving.

\u2705 Work with nested compositions.

Compositions can be nested to further define and abstract application specific needs.

\u2705 Work with external secrets.

Crossplane can be configured to publish secrets external to the cluster in which it runs.

\u2705 Check out the RDS day 2 operation doc

\u2705 Check out example Gatekeeper configurations.

\u2705 Upbound AWS provider examples

"},{"location":"#learn-more","title":"Learn More","text":""},{"location":"#debugging","title":"Debugging","text":"

For debugging Compositions, CompositeResourceDefinitions, etc., please see the debugging guide.

"},{"location":"#adopters","title":"Adopters","text":"

A list of publicly known users of the Crossplane Blueprints for Amazon EKS project can be found in ADOPTERS.md.

"},{"location":"#security","title":"Security","text":"

See CONTRIBUTING for more information.

"},{"location":"#license","title":"License","text":"

This library is licensed under the Apache 2.0 License.

"},{"location":"faq/","title":"Frequently Asked Questions","text":""},{"location":"faq/#timeouts-on-destroy","title":"Timeouts on destroy","text":"

Customers who are deleting their environments using terraform destroy may see timeout errors when VPCs are being deleted. This is due to a known issue in the vpc-cni.

Customers may face a situation where ENIs that were attached to EKS managed nodes (the same may apply to self-managed nodes) are not deleted by the VPC CNI as expected, which leads to IaC tool failures such as the VPC deletion timeouts mentioned above.

The current recommendation is to execute cleanup in the following order (a command-level sketch follows the list):

  1. Delete all pods that have been created in the cluster.
  2. Add a delay / wait.
  3. Delete the VPC CNI.
  4. Delete the nodes.
  5. Delete the cluster.
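
As an illustration only, the order above might translate into commands like the following; the application namespace is a placeholder and the Terraform targets depend on the pattern you deployed.

# 1. delete workload pods (repeat for each application namespace)\nkubectl delete pods --all -n <APP_NAMESPACE>\n# 2. give the VPC CNI time to release ENIs\nsleep 60\n# 3. delete the VPC CNI (the aws-node daemonset)\nkubectl -n kube-system delete daemonset aws-node\n# 4. and 5. delete nodes and the cluster, e.g. via Terraform targets\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -auto-approve\n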
"},{"location":"getting-started/","title":"Getting Started","text":"

This getting started guide will help you bootstrap your first cluster using Crossplane Blueprints.

"},{"location":"getting-started/#prerequisites","title":"Prerequisites","text":"

Ensure that you have installed the following tools locally:

"},{"location":"getting-started/#deploy","title":"Deploy","text":""},{"location":"getting-started/#eksctl","title":"eksctl","text":"
  1. TBD
"},{"location":"getting-started/#terraform","title":"terraform","text":"
  1. For consuming Crossplane Blueprints, please see the Getting Started section. For exploring and trying out the patterns provided, clone the project locally to quickly get up and running with a pattern, then cd into the pattern directory of your choice.

  2. To provision the pattern, the typical steps of execution are as follows:

    terraform init\nterraform apply -target=\"module.vpc\" -auto-approve\nterraform apply -target=\"module.eks\" -auto-approve\nterraform apply -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform apply -target=\"module.crossplane\" -auto-approve\nterraform apply -auto-approve\n
  3. Once all of the resources have successfully been provisioned, the following command can be used to update the kubeconfig on your local machine and allow you to interact with your EKS Cluster using kubectl.

    aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME> --alias <CLUSTER_NAME>\n

    Terraform outputs

    The examples will output the aws eks update-kubeconfig ... command as part of the Terraform apply output to simplify this process for users.

  4. Once you have updated your kubeconfig, you can verify that you are able to interact with your cluster by running the following command:

    kubectl get nodes\n

    This should return a list of the node(s) running in the cluster created. If any errors are encountered, please re-trace the steps above and consult the pattern's README.md for more details on any additional/specific steps that may be required.

"},{"location":"getting-started/#destroy","title":"Destroy","text":"

To teardown and remove the resources created in the bootstrap, the typical steps of execution are as follows:

terraform destroy -target=\"module.crossplane\" -auto-approve\nterraform destroy -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -target=\"module.vpc\" -auto-approve\nterraform destroy -auto-approve\n

Resources created outside of Terraform

Some resources that Terraform is not aware of may have been created, and these will cause issues when attempting to clean up the pattern. Please see destroy.md for more details.

"},{"location":"_partials/destroy/","title":"Destroy","text":"
terraform destroy -target=\"module.crossplane\" -auto-approve\nterraform destroy -target=\"module.gatekeeper\" -auto-approve\nterraform destroy -target=\"module.eks_blueprints_addons\" -auto-approve\nterraform destroy -target=\"module.eks\" -auto-approve\nterraform destroy -target=\"module.vpc\" -auto-approve\nterraform destroy -auto-approve\n
"},{"location":"patterns/debugging/","title":"Debugging CompositeResourceDefinitions (XRD) and Compositions","text":""},{"location":"patterns/debugging/#composite-resources-and-claim-overview","title":"Composite resources and claim overview","text":"
    flowchart LR\n    subgraph \"Some namespace\"\n        direction LR\n        XRC[\"Claim\"]\n    end\n\n    subgraph \"Cluster Scoped\"\n        direction LR\n        XR(\"Composite Resource\")\n        MR1(\"Managed Resource \\n(e.g. RDS instance)\")\n        MR2(\"Managed Resource \\n(e.g. IAM Role)\")\n    end\n    XR --> |\"spec.resourceRef\"| MR1\n    XR --> |\"spec.resourceRef\"| MR2\n    XRC --> |\"spec.resourceRef\"| XR\n
"},{"location":"patterns/debugging/#general-debugging-steps","title":"General debugging steps","text":"

Most error messages are logged to resources' events field. Whenever your Composite Resources are not getting provisioned, follow these steps (a small helper for steps 3 and 4 is sketched below): 1. Get the events for the root resource using kubectl describe or kubectl get event. 2. If there are errors in the events, address them. 3. If there are no errors, follow its sub-resources: kubectl get <KIND> <NAME> -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq 4. Go back to step 1 using one of the resources returned by step 3.
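
A small shell helper along these lines can speed up steps 3 and 4; it is only a sketch, assumes jq is installed, and the kind and name at the end are examples.

# rough helper: show the events for a resource, then list the sub-resources it references\ninspect() {\n  kubectl describe \"$1\" \"$2\" | sed -n '/^Events:/,$p'\n  kubectl get \"$1\" \"$2\" -o jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq\n}\n# example: start at the claim, then repeat with each kind/name it prints\ninspect exampleapp example-application\n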

Note: Debugging is also enabled for the AWS provider pods. You may find it useful to check the logs for the provider pods for extra information on failures. You can also disable logging here.

# kubectl get pods -n crossplane-system\nNAME                                                READY   STATUS    RESTARTS   AGE\ncrossplane-5b6896bb4c-mjr8x                         1/1     Running   0          12d\ncrossplane-rbac-manager-7874897d59-fc9wf            1/1     Running   0          12d\nprovider-aws-f6a4a9bdba04-84ddf67474-z78nl          1/1     Running   0          12d\nprovider-kubernetes-cfae2275d58e-6b7bcf5bb5-2rjk2   1/1     Running   0          8d\n\n# For the AWS provider logs\n# kubectl -n crossplane-system logs provider-aws-f6a4a9bdba04-84ddf67474-z78nl | less\n\n# For Crossplane core logs\n# kubectl -n crossplane-system logs crossplane-5b6896bb4c-mjr8x  | less\n
"},{"location":"patterns/debugging/#debugging-example","title":"Debugging Example","text":""},{"location":"patterns/debugging/#composition","title":"Composition","text":"

An example application was deployed as a claim of a composite resource. Kind = ExampleApp. Name = example-application.

The example application never reaches the available state.

  1. Run kubectl describe exampleapp example-application

    Status:\nConditions:\n    Last Transition Time:  2022-03-01T22:57:38Z\n    Reason:                Composite resource claim is waiting for composite resource to become Ready\n    Status:                False\n    Type:                  Ready\nEvents:                    <none>\n

  2. No error in events. Find its cluster scoped resource (composite resource).

    # kubectl get exampleapp example-application -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq\n\n{\n  \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n  \"kind\": \"XExampleApp\",\n  \"name\": \"example-application-xqlsz\"\n}\n

  3. In the above output, we see the cluster scoped resource for this claim. Kind = XExampleApp name = example-application-xqlsz
  4. Get the cluster resource's event.
    # kubectl describe xexampleapp example-application-xqlsz\n\nEvents:\nType     Reason                   Age               From                                                             Message\n----     ------                   ----              ----                                                             -------\nNormal   PublishConnectionSecret  9s (x2 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully published connection details\nNormal   SelectComposition        6s (x6 over 11s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully selected composition\nWarning  ComposeResources         6s (x6 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  cannot render composed resource from resource template at index 3: cannot use dry-run create to name composed resource: an empty namespace may not be set during creation\nNormal   ComposeResources         6s (x6 over 10s)  defined/compositeresourcedefinition.apiextensions.crossplane.io  Successfully composed resources\n
  5. We see errors in the events. It is complaining that a namespace is not specified in its composition. For this particular kind of error, we can get its sub-resources and check which one was not created.

    # kubectl get xexampleapp example-application-xqlsz -o=jsonpath='{.spec.resourceRef}{\" \"}{.spec.resourceRefs}' | jq\n[\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XDynamoDBTable\",\n        \"name\": \"example-application-xqlsz-6j9nm\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XIAMPolicy\",\n        \"name\": \"example-application-xqlsz-lp9wt\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"XIAMPolicy\",\n        \"name\": \"example-application-xqlsz-btwkn\"\n    },\n    {\n        \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n        \"kind\": \"IRSA\"\n    }\n]\n
    6. Notice the last element in the array does not have a name. When a resource in a composition fails validation, the resource object is not created and will not have a name. For this particular issue, we need to specify the namespace for the IRSA resource.

"},{"location":"patterns/debugging/#composition-definition","title":"Composition Definition","text":"

Debugging Composition Definitions is similar to debugging Compositions.

  1. Get XRD
    # kubectl get xrd testing.awsblueprints.io\nNAME                       ESTABLISHED   OFFERED   AGE\ntesting.awsblueprints.io                           66s\n
  2. Notice its status is not established. We describe this XRD to get its events:
    # kubectl describe xrd testing.awsblueprints.io\nEvents:\nType     Reason              Age                    From                                                             Message\n----     ------              ----                   ----                                                             -------\nNormal   ApplyClusterRoles   3m19s (x3 over 3m19s)  rbac/compositeresourcedefinition.apiextensions.crossplane.io     Applied RBAC ClusterRoles\nNormal   RenderCRD           18s (x9 over 3m19s)    defined/compositeresourcedefinition.apiextensions.crossplane.io  Rendered composite resource CustomResourceDefinition\nWarning  EstablishComposite  18s (x9 over 3m19s)    defined/compositeresourcedefinition.apiextensions.crossplane.io  cannot apply rendered composite resource CustomResourceDefinition: cannot create object: CustomResourceDefinition.apiextensions.k8s.io \"testing.awsblueprints.io\" is invalid: metadata.name: Invalid value: \"testing.awsblueprints.io\": must be spec.names.plural+\".\"+spec.group\n
  3. We see in the events that the CRD cannot be generated for this XRD. In this case, we need to ensure the name is spec.names.plural+\".\"+spec.group.
"},{"location":"patterns/debugging/#providers","title":"Providers","text":"

There are two ways to install providers in Crossplane: using configuration.pkg.crossplane.io or provider.pkg.crossplane.io. In this repository, we use provider.pkg.crossplane.io. Note that if you define a configuration.pkg.crossplane.io object, Crossplane will create a provider.pkg.crossplane.io object, and that object is managed by Crossplane. Please refer to this guide for more information about Crossplane Packages.
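
For reference, installing the AWS provider directly through a provider.pkg.crossplane.io object looks roughly like the following; the package version shown is only an example.

echo 'apiVersion: pkg.crossplane.io/v1\nkind: Provider\nmetadata:\n  name: provider-aws\nspec:\n  package: crossplane/provider-aws:v0.29.0' | kubectl apply -f -\n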

If you are experiencing provider issues, the steps below are a good starting point.

  1. Check the status of the provider object.

    # kubectl describe provider.pkg.crossplane.io provider-aws\nStatus:\n    Conditions:\n        Last Transition Time:  2022-08-04T16:19:44Z\n        Reason:                HealthyPackageRevision\n        Status:                True\n        Type:                  Healthy\n        Last Transition Time:  2022-08-04T16:14:29Z\n        Reason:                ActivePackageRevision\n        Status:                True\n        Type:                  Installed\n    Current Identifier:      crossplane/provider-aws:v0.29.0\n    Current Revision:        provider-aws-a2e16ca2fc1a\nEvents:\n    Type    Reason                  Age                      From                                 Message\n    ----    ------                  ----                     ----                                 -------\n    Normal  InstallPackageRevision  9m49s (x237 over 4d17h)  packages/provider.pkg.crossplane.io  Successfully installed package revision\n
    In the output above we see that this provider is healthy. To get more information about this provider, we can dig deeper. The Current Revision field tells us the next object to look at.

  2. When you create a provider object, Crossplane will create a ProviderRevision object based on the contents of the OCI image. In this example, we are specifying the OCI image to be crossplane/provider-aws:v0.29.0. This image contains a YAML file which defines many Kubernetes objects such as Deployment, ServiceAccount, and CRDs. The ProviderRevision object creates resources necessary for a provider to function based on the contents of the YAML file. To inspect what is deployed as part of the provider package, we inspect the ProviderRevision object. The Current Revision field above indicates which ProviderRevision object is currently used for this provider.

    # kubectl get providerrevision provider-aws-a2e16ca2fc1a\n\nNAME                        HEALTHY   REVISION   IMAGE                             STATE    DEP-FOUND   DEP-INSTALLED   AGE\nprovider-aws-a2e16ca2fc1a   True      1          crossplane/provider-aws:v0.29.0   Active                               19d\n

    When you describe the object, you will find that many objects are managed by this same object.

    # kubectl describe providerrevision provider-aws-a2e16ca2fc1a\n\nStatus:\n    Controller Ref:\n        Name:  provider-aws-a2e16ca2fc1a\n    Object Refs:\n        API Version:  apiextensions.k8s.io/v1\n        Kind:         CustomResourceDefinition\n        Name:         natgateways.ec2.aws.crossplane.io\n        UID:          5c36d1bc-61b8-44f8-bca0-47e368af87a9\n        ....\nEvents:\n    Type    Reason             Age                    From                                         Message\n    ----    ------             ----                   ----                                         -------\n    Normal  SyncPackage        22m (x369 over 4d18h)  packages/providerrevision.pkg.crossplane.io  Successfully configured package revision\n    Normal  BindClusterRole    15m (x348 over 4d18h)  rbac/providerrevision.pkg.crossplane.io      Bound system ClusterRole to provider ServiceAccount(s)\n    Normal  ApplyClusterRoles  15m (x364 over 4d18h)  rbac/providerrevision.pkg.crossplane.io      Applied RBAC ClusterRoles\n

    The event field will also indicate any issues that may have occurred during this process. 3. If you do not see any errors in the event field above, you should check if deployments and pods were provisioned successfully. As a part of the provider configuration process, a deployment is created:

    # kubectl get deployment -n crossplane-system\n\nNAME                        READY   UP-TO-DATE   AVAILABLE   AGE\ncrossplane                  1/1     1            1           105d\ncrossplane-rbac-manager     1/1     1            1           105d\nprovider-aws-a2e16ca2fc1a   1/1     1            1           19d\n\n# kubectl get pods -n crossplane-system\nNAME                                         READY   STATUS    RESTARTS   AGE\ncrossplane-54db688c8d-qng6b                  2/2     Running   0          4d19h\ncrossplane-rbac-manager-5776c9fbf4-wn5rj     1/1     Running   0          4d19h\nprovider-aws-a2e16ca2fc1a-776769ccbd-4dqml   1/1     Running   0          4d23h\n
    If there are any pods failing, check their logs and remedy the problem.

"},{"location":"patterns/nested-compositions/","title":"Nested Compositions","text":"

Compositions can be nested within a composition. Take a look at the example-application defined in the compositions/aws-provider/example-application directory. The Composition contains Compositions defined in other directories and creates a DynamoDB table, IAM policies for the table, a Kubernetes service account, and an IAM role for service accounts (IRSA). This pattern is very powerful. It lets you define your abstractions based on someone else's prior work.

An example yaml file to deploy this Composition is available at examples/aws-provider/composite-resources/example-application/example-application.yaml.

Install the AWS Compositions and XRDs following the instructions in compositions/README.md.

Let\u2019s take a look at how this example application can be deployed.

kubectl create ns example-app\n# namespace/example-app created\n\nkubectl apply -f examples/aws-provider/composite-resources/example-application/example-application.yaml\n# exampleapp.awsblueprints.io/example-application created\n

You can look at the example application object, but it doesn\u2019t tell you much about what is happening. Let\u2019s dig deeper.

# kubectl get exampleapp -n example-app example-application -o=jsonpath='{.spec.resourceRef}'\n{\"apiVersion\":\"awsblueprints.io/v1alpha1\",\"kind\":\"XExampleApp\",\"name\":\"example-application-8x9fr\"}\n
By looking at the spec.resourceRef field, you can see which cluster wide object this object created. Let\u2019s see what resources are created in the cluster wide object.

# kubectl get XExampleApp example-application-8x9fr -o=jsonpath='{.spec.resourceRefs}' | jq\n[\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XDynamoDBTable\",\n    \"name\": \"example-application-8x9fr-svxxg\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIAMPolicy\",\n    \"name\": \"example-application-8x9fr-w9fgb\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIAMPolicy\",\n    \"name\": \"example-application-8x9fr-r5hzx\"\n  },\n  {\n    \"apiVersion\": \"awsblueprints.io/v1alpha1\",\n    \"kind\": \"XIRSA\",\n    \"name\": \"example-application-8x9fr-r7dzn\"\n  },\n  {\n    \"apiVersion\": \"kubernetes.crossplane.io/v1alpha1\",\n    \"kind\": \"Object\",\n    \"name\": \"example-application-8x9fr-bv7tl\"\n  }\n]\n

We see that it has five sub objects. Notice the first object is the XDynamoDBTable kind. This application Composition contains the DynamoDB table Composition. In fact, four out of five sub objects in the above output are Compositions.

Let\u2019s take a look at the XIRSA object. As the name implies, this object is responsible for setting up EKS IRSA for the application pod to use.

# kubectl get XIRSA example-application-8x9fr-r7dzn -o jsonpath='{.spec.resourceRefs}' | jq\n[\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"Role\",\n    \"name\": \"example-application-8x9fr-nwgbh\"\n  },\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"RolePolicyAttachment\",\n    \"name\": \"example-application-8x9fr-n6g8q\"\n  },\n  {\n    \"apiVersion\": \"iam.aws.crossplane.io/v1beta1\",\n    \"kind\": \"RolePolicyAttachment\",\n    \"name\": \"example-application-8x9fr-kzrsg\"\n  },\n  {\n    \"apiVersion\": \"kubernetes.crossplane.io/v1alpha1\",\n    \"kind\": \"Object\",\n    \"name\": \"example-application-8x9fr-bzfr6\"\n  }\n]\n

As you can see, it created an IAM Role and attached policies. It also created a Kubernetes service account as represented by the last element. If you look at the created service account, it has the necessary properties for IRSA to function.

# kubectl get sa -n example-app example-app -o yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/example-application-8x9fr-nwgbh\n
You can examine the IAM Role as well.

# aws iam list-roles --query 'Roles[?starts_with(RoleName, `example-application`) == `true`]'\n[\n    {\n        \"Path\": \"/\",\n        \"RoleName\": \"example-application-8x9fr-nwgbh\",\n        \"Arn\": \"arn:aws:iam::1234569091:role/example-application-8x9fr-nwgbh\",\n        \"AssumeRolePolicyDocument\": {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": {\n                        \"Federated\": \"arn:aws:iam::1234569091:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/12345919291AVBD\"\n                    },\n                    \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n                    \"Condition\": {\n                        \"StringEquals\": {\n                            \"oidc.eks.us-west-2.amazonaws.com/id/abcd12345:sub\": \"system:serviceaccount:example-app:example-app\"\n                        }\n                    }\n                }\n            ]\n        },\n        \"MaxSessionDuration\": 3600\n    }\n] \n
"},{"location":"patterns/patching-101/","title":"Patching 101","text":""},{"location":"patterns/patching-101/#crossplane-patching-basics","title":"Crossplane Patching Basics","text":""},{"location":"patterns/patching-101/#component-relationships","title":"Component relationships","text":"
flowchart LR \n\nXRD(Composite Resource Definition)\nC(Composition)\nCR(Composite Resource)\nMR(Managed Resource)\nMR2(Managed Resource)\nClaim\nBucket(S3 Bucket)\nTable(DynamoDB Table)\n\nC --satisfies--> XRD\nXRD --define schema \\n create CRDs--> CRDs\nC --defines--> CR --> MR --managed--> Bucket\nCR --> MR2 --manage--> Table\nClaim --trigger instantiation--> CR\n
"},{"location":"patterns/patching-101/#from-composite-resource-to-managed-resource","title":"From Composite Resource to Managed Resource","text":"

Crossplane Compositions allow you to modify sub-resources based on arbitrary fields from their composite resource. This type of patch is referred to as FromCompositeFieldPath. Take for example:

type: FromCompositeFieldPath\nfromFieldPath: spec.region\ntoFieldPath: spec.forProvider.region\n

This tells Crossplane to: 1. Look at the spec.region field in the Composite Resource. 2. Then copy that value into the spec.forProvider.region field of this instance of the managed resource.

flowchart LR\n\nsubgraph Composite Resource\n    cs(spec: \\n&nbsp region: <font color=red>us-west-2</font>)\nend\n\nsubgraph Managed Resource\n    ms(spec: \\n&nbsp forProvider: \\n&nbsp&nbsp region: <font color=red>us-west-2</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left \n\ncs --> ms\n
"},{"location":"patterns/patching-101/#from-managed-resource-to-composite-resource","title":"From Managed Resource to Composite Resource","text":"

Compositions also allow you to modify the composite resource from its sub resources. For example:

type: ToCompositeFieldPath\nfromFieldPath: status.atProvider.arn\ntoFieldPath: status.bucketArn\npolicy:\n  fromFieldPath: Optional # This can be omitted since it defaults to Optional.\n

This tells Crossplane to: 1. Look at the status.atProvider.arn field on the managed resource. 2. If the status.atProvider.arn field is empty, skip this patch. 3. Copy the value into the status.bucketArn field on the composite resource.

flowchart LR\n\nsubgraph Managed Resource\n    ms(status: \\n&nbsp atProvider: \\n&nbsp&nbsp arn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\n\nstyle cs text-align:left\nstyle ms text-align:left\n\nms --> cs\n
"},{"location":"patterns/patching-101/#putting-them-together","title":"Putting them together","text":"

With these patching methods together, you can pass values between managed resources.

type: FromCompositeFieldPath\nfromFieldPath: status.bucketArn\ntoFieldPath: spec.forProvider.bucketArn\npolicy:\n  fromFieldPath: Required\n

This tells Crossplane to: 1. Look at the status.bucketArn field in the Composite Resource. 2. If the status.bucketArn field is empty, do not skip the patch; instead, stop composing this managed resource until the value is available. 3. Once the status.bucketArn field is filled with a value, copy that value into the spec.forProvider.bucketArn field in the managed resource.

With the Required policy, you can create a soft dependency. This is useful when you do not want to create a resource before another resource is ready.

flowchart LR\n\nsubgraph Managed Resource 1\n    ms(status: \\n&nbsp atProvider: \\n&nbsp&nbsp arn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Managed Resource 2\n    ms2(spec: \\n&nbsp forProvider: \\n&nbsp&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp bucketArn: <font color=red>arn&#58aws&#58s3&#58&#58&#58my-bucket</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left\nstyle ms2 text-align:left\n\nms --> cs\ncs --> ms2\n
"},{"location":"patterns/patching-101/#transform","title":"Transform","text":"

You can also modify values when patching. For example, you can use the following transformation to extract the account ID from this managed policy's ARN.

type: ToCompositeFieldPath\nfromFieldPath: status.policyArn\ntoFieldPath: status.accountId\ntransforms:\n  - type: string\n    string:\n      type: Regexp\n      regexp:\n        match: 'arn:aws:iam::(\\d+):.*'\n        group: 1\n

This tells Crossplane to: 1. Look at the status.policyArn field in the Managed Resource. 2. If the field has a value, take that value and run a regular expression match against it. 3. When there is a match, take the first capture group and store it in the status.accountId field in the Composite Resource.

flowchart LR\n\nsubgraph Managed Resource\n    ms(status: \\n&nbsp policyArn: arn:aws:iam::<font color=red>12345</font>:policy/my-policy)\nend\n\nsubgraph Composite Resource\n    cs(status: \\n&nbsp accountId: <font color=red>12345</font>)\nend\n\nstyle cs text-align:left\nstyle ms text-align:left\n\nms --regular expression match--> cs\n
"},{"location":"patterns/patching-101/#reference","title":"Reference","text":"

See the official documentation for more information. https://docs.crossplane.io/master/concepts/composition/#patch-types

"},{"location":"patterns/rds-day-2/","title":"RDS day 2 operations","text":""},{"location":"patterns/rds-day-2/#background-and-problem-statement","title":"Background and problem statement","text":"

Managing databases can be challenging because they are stateful, not easily replaceable, and data loss could have significant business impact. An unexpected restart could wreak havoc on applications that depend on them. Because of this, database users and administrators want to offload the management, maintenance, and availability of databases to another entity such as a cloud provider. Amazon RDS is one such service. The Crossplane AWS provider aims to create building blocks for a self-service developer experience by providing the ability to manage AWS resources in Kubernetes-native ways.

In Amazon RDS, some operations require an instance restart; for example, version upgrades and storage size modifications. RDS attempts to minimize the impact of such operations by: 1. Defining a scheduled maintenance window. 2. Queuing the changes that you want to make (note that some changes may not need a restart). 3. Applying the queued changes during the next scheduled maintenance window.

This approach is fundamentally different from GitOps. In GitOps, when a change is checked into your repository, the actual resources are expected to match the specifications provided in the repository.

RDS supports applying these changes immediately instead of waiting for a scheduled maintenance window, and the Crossplane AWS providers expose the option to apply changes immediately as well. This is the option that should be used when managing RDS with GitOps. However, this leads to problems when enabling a self-service model where developers provision resources on their own.
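
For comparison, this is what applying a change immediately looks like with the plain AWS CLI; the instance identifier and instance class below are placeholders.

# without --apply-immediately the change is queued for the next maintenance window\naws rds modify-db-instance \\\n  --db-instance-identifier my-instance \\\n  --db-instance-class db.r5.large \\\n  --apply-immediately\n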

There are some problems when using the apply immediately option. - Updates made to certain fields need a restart to take effect, but this information may not be surfaced back to users. For example, changing the parameter group on an instance requires a restart, but this information is not available in the Upbound Official provider. The community provider surfaces this information in a status field. In both providers, the status fields indicate Available and ReconcileSuccess, which can give end users the illusion that parameter changes succeeded when in reality they have not taken effect yet. - Some field changes trigger an instance restart. For example, changing the instance class triggers a restart and can potentially cause an outage. Developers may not know which fields cause restarts because they are not familiar with the underlying technologies. You could document potentially dangerous fields, but that is not enough to reliably stop unintended restarts from happening.

The main goal of this document is to provide guidance on adding guardrails for end users when managing RDS resources through Crossplane.

"},{"location":"patterns/rds-day-2/#parameter-groups","title":"Parameter Groups","text":"

Parameter Groups define how the underlying database engine is configured. For example, if you wish to change the binlog_cache_size configuration value for your MySQL database, you can do that through a parameter group. A parameter group is not limited to a single RDS instance; it can be shared by multiple instances.

In Parameter Groups, there are two types of parameters: dynamic and static. Dynamic parameters do not require a restart for their values to be applied to the running instance / cluster. Static parameters require a restart for their values to be applied. Additionally, dynamic parameters support specifying how changes to them are applied. When immediate is specified, the changes to dynamic parameters are applied immediately. When pending-reboot is specified, the changes are applied during the next restart or the next maintenance window, whichever comes first.
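
As an illustration, changing a single dynamic parameter with the AWS CLI while choosing the apply method explicitly might look like this; the group name and parameter are placeholders.

aws rds modify-db-parameter-group \\\n  --db-parameter-group-name my-mysql80-params \\\n  --parameters \"ParameterName=max_connections,ParameterValue=200,ApplyMethod=immediate\"\n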

Since static parameters do not support the immediate apply method, specifying it in your composition could lead to unexpected errors. Therefore, extra care should be taken when exposing this resource to your end users, who may not be aware of the underlying engine's specifics.

Summarizing everything above, there are a few general approaches to managing RDS configuration changes.

  1. You want to ensure that parameter group values in the running cluster / instance match what is defined in your Git repository with no delay. The only certain way to do this is by restarting the cluster / instance during the reconciliation process.
  2. You can wait for parameter group changes to be applied during the next maintenance window. This means you may need to wait up to 7 days for the changes to be applied.
  3. The change does not have to be applied immediately, but it needs to happen sooner than 7 days. This requires a separate workflow to restart the cluster / instance.
  4. Use the RDS Blue Green deployment feature.

For reference, problems encountered during parameter group updates in ACK and Terraform are discussed in this issue and this blog post.

"},{"location":"patterns/rds-day-2/#solutions","title":"Solutions","text":""},{"location":"patterns/rds-day-2/#considerations","title":"Considerations","text":"

As of writing this doc, there are 9 fields that require a restart to take effect when using a single RDS instance, and 3 fields that require a restart when using multi-AZ instances. Unfortunately, there is no native way to retrieve these fields programmatically.

There are 188 static parameters in the mysql8.0 family, and a similar number in other parameter group families. You can get a list of static parameters by using the aws rds describe-engine-default-parameters command, as shown below.
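
A sketch of pulling only the static parameter names for that family; the --query expression is one way to filter the output, not the only one.

aws rds describe-engine-default-parameters \\\n  --db-parameter-group-family mysql8.0 \\\n  --query \"EngineDefaults.Parameters[?ApplyType=='static'].ParameterName\" \\\n  --output text\n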

These fields and parameters need to be stored for use by whatever check mechanism you choose, and they need to be updated regularly.

It is also worth pointing out that when a user updates a parameter in a parameter group, the change to the parameter group itself usually succeeds. However, that is rarely the user's real intent; the intent is to change the parameter and have it applied to a running instance. In both providers, changes to static parameters are not actually applied until the next maintenance window or until a manual restart is issued.
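
One way to see whether an instance still has parameter changes waiting for a restart is to check the parameter apply status on the instance itself; the identifier below is a placeholder and the output is illustrative.

aws rds describe-db-instances \\\n  --db-instance-identifier my-instance \\\n  --query \"DBInstances[0].DBParameterGroups\"\n# illustrative output\n# [ { \"DBParameterGroupName\": \"my-mysql80-params\", \"ParameterApplyStatus\": \"pending-reboot\" } ]\n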

We will discuss a few approaches to this problem below. Whichever approach you choose, it is important for the check mechanisms to work reliably. It is easy to lose users' trust when a check warns of a restart that never happens, or, worse, when it fails to detect a restart and causes an outage.

"},{"location":"patterns/rds-day-2/#check-during-pr","title":"Check during PR","text":"

Use the pull request as a checkpoint and ensure developers are aware of the potential consequences of the changes. An example process may look something like the following.

flowchart TD\n    Comment(Comment on PR)\n\n    subgraph Workflow\n        GetChangedFiles(Get changed files)\n        GetChangedFiles(Get changed files)\n        StepCheck(Will this cause a restart?)\n    end\n\n    subgraph Data Source \n        FieldDefinitions(Fields that need restarting)\n    end \n\n    Restart(Restart immediately)\n\n    FieldDefinitions <--reference--> StepCheck\n\n    PR(PR Created) --trigger--> GetChangedFiles --> StepCheck --yes--> Comment --> Approval(Wait for Approval) --> Merge --> GitOps(GitOps Tooling)\n    StepCheck--no--> Approval\n    GitOps --apply changes now --> Restart --> Done\n    GitOps --wait until next \\n maintenance window--> Done\n

In this example, whenever a pull request is created, a workflow is executed and a comment is created on the PR warning the developers of potential impacts. When developers approve the PR, it implies that they are aware of the consequences. To check whether a PR is impacted, you can use one of the following options: - Parse the git diff and search for changes to \"dangerous\" fields. - Use kubectl diff and then look for changes to \"dangerous\" fields. This requires read access to the target cluster but is more accurate.
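
A very rough sketch of the kubectl diff variant, assuming a curated list of dangerous field names is kept in a file named restart-fields.txt; a real workflow would need smarter matching than a plain grep.

# server-side diff of the changed claim, then flag risky fields (file names are hypothetical)\nkubectl diff -f claim.yaml > claim.diff || true\nif grep -f restart-fields.txt claim.diff; then\n  echo \"WARNING: this change touches fields that may restart the instance\"\nfi\n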

"},{"location":"patterns/rds-day-2/#check-at-runtime","title":"Check at runtime","text":"

Another approach is to deny such operations at runtime using a policy engine and/or a custom validating webhook unless certain conditions are met. This means problems with the RDS configuration are communicated to developers through their GitOps tooling via the reasons given for denial. Note that it is a good idea to check at runtime even if you also check during the PR.

"},{"location":"patterns/rds-day-2/#example-1","title":"Example 1","text":"
flowchart LR\n    subgraph Kubernetes\n        ValidatingController(Policy Engine / Validating controller)\n    end \n\n    subgraph Git \n        PR(PR Merged)\n    end\n\n    subgraph Ticketing\n       Approved(Approved Changes)\n    end\n\n    GitOps(GitOps tooling)\n\n    PR(PR Merged) --> GitOps --> ValidatingController\n    ValidatingController --check--> Ticketing\n    ValidatingController --deny and provide reason--> GitOps\n    ValidatingController --Once Approved--> Restart\n

In the example above, no check is performed during the PR. During admission into the Kubernetes cluster, a validating controller reaches out to the ticketing system and verifies whether the change is approved. If no approved ticket is associated with the change, it is rejected and a reason is provided.

Note that the ticketing system here is just an example; it can be any type of system that provides a decision.

"},{"location":"patterns/rds-day-2/#example-2","title":"Example 2","text":"

flowchart LR\n    subgraph Kubernetes\n        ConfigMap(ConfigMap w/ ticket numbers)\n        ValidatingController(Policy Engine / Validating controller)\n    end \n\n    subgraph Git\n        subgraph PR\n            Claim\n            Manifests(Other Manifests)\n        end\n    end\n\n    subgraph Ticketing\n       Approved(Approved Changes)\n    end\n\n    User\n    GitOps(GitOps tooling)\n\n    User --Create Ticket--> Ticketing\n    User --Annotate with ticket number--> Claim\n    PR(PR Merged) --> GitOps --> ValidatingController\n    ValidatingController --reference--> ConfigMap\n    ValidatingController --deny if not approved \\n and provide reason--> GitOps\n    Approved --create when the ticket \\n is approved--> ConfigMap\n    ValidatingController--Once Approved--> Restart\n
In this example, a developer creates a ticket in the ticketing system and annotates the infrastructure claim with the ticket number. The admission controller checks whether the change affects fields that require approval. If approval is required, the change is denied until the ticket is approved, and the reason is reported back to the GitOps tooling.

Once the ticket is approved, a ConfigMap is created with the ticket number as its name or as one of its annotations. The next time the GitOps tooling attempts to apply the manifests, the admission controller sees that the ConfigMap now exists and allows the change to be deployed. Once it is deployed, the ConfigMap can be marked for deletion. With this approach, the admission controller does not need read access to the ticketing system.
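
A hypothetical illustration of the moving parts; the annotation key, ticket number, namespace, and ConfigMap layout are all invented for this example and would be defined by your own policy.

# developer records the ticket number on the claim (hypothetical annotation key)\nkubectl -n application1 annotate objectstorage my-claim example.org/change-ticket=TICKET-123\n# a pipeline creates this ConfigMap once the ticket is approved; the admission\n# controller looks it up before allowing the change through\nkubectl -n approvals create configmap ticket-123 --from-literal=status=approved\n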

"},{"location":"patterns/rds-day-2/#blue-green-deployment","title":"Blue Green deployment","text":"

RDS added native support for blue green deployment. This allows for safer database updates because RDS manages the process of creating an alternate instance, copying data over to it, and shifting traffic to it.

As of writing this doc, neither provider supports this functionality. Because the functionality is available in Terraform, the Upbound official provider should be able to support it in the future. In addition, this functionality is supported for MariaDB and MySQL only.

"},{"location":"patterns/rds-day-2/#break-glass-scenarios","title":"Break glass scenarios","text":"

In case of an emergency where something unexpected occurred and you need to stop providers from making changes to AWS resources, you can use one of the following methods: - To prevent providers from making changes to a specific resource, you can use the crossplane.io/paused annotation, e.g.

kubectl annotate instance.rds.aws.upbound.io my-instance crossplane.io/paused=true\n
- To prevent providers from making changes to ALL of your resources, you can update the number of replicas in the ControllerConfig to 0. This will terminate the running provider pod, e.g.
apiVersion: pkg.crossplane.io/v1alpha1\nkind: ControllerConfig\nspec:\n  replicas: 0 # This value is usually 1. \n
- If you cannot access the cluster, you can prevent providers from making changes to all or some of your resources by either removing the policy associated with the IAM role or adjusting the policy to allow it to make changes to certain resources only.
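
For the last option, detaching the provider role's policy with the AWS CLI could look like the following; the role name and policy ARN are placeholders.

aws iam detach-role-policy \\\n  --role-name my-crossplane-provider-role \\\n  --policy-arn arn:aws:iam::111122223333:policy/my-crossplane-provider-policy\n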

"},{"location":"patterns/rds-day-2/#references","title":"References","text":"

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/modify-multi-az-db-cluster.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#blue_green_update

"},{"location":"patterns/vault-integration/","title":"Overview","text":""},{"location":"patterns/vault-integration/#goals","title":"Goals","text":"

In this doc, we will configure the following: - A Vault server (in-cluster or outside the cluster) - A Crossplane installation with the AWS provider on EKS - Provisioning of an S3 bucket through Crossplane - Publishing of bucket information as a Vault secret - Access to the published information in Vault from a pod using the Vault Agent Injector

"},{"location":"patterns/vault-integration/#prerequisites","title":"Prerequisites","text":"

The following command line tools are required: - kubectl - helm - eksctl - aws

Note: - As of Crossplane 1.9.0, support for external secret stores is still in an alpha state and may undergo changes. - This assumes a single-cluster, multi-tenant use case; however, the underlying concepts discussed here should be applicable to multi-cluster setups as well. - This doc is based on the excellent external vault configuration guide. Please check that guide out for more detailed information.

"},{"location":"patterns/vault-integration/#procedure","title":"Procedure","text":""},{"location":"patterns/vault-integration/#provision-a-eks-cluster","title":"Provision a EKS cluster","text":"
# from this repository root\neksctl create cluster -f bootstrap/eksctl/eksctl.yaml\n
"},{"location":"patterns/vault-integration/#create-a-vault-service","title":"Create a Vault service","text":"

You can create a Vault service in the same cluster as Crossplane or on a separate VM.

"},{"location":"patterns/vault-integration/#in-cluster","title":"In-cluster","text":"

Follow: https://docs.crossplane.io/latest/guides/vault-as-secret-store/

"},{"location":"patterns/vault-integration/#on-an-external-vm","title":"On an external VM","text":"

This VM must be reachable by the Crossplane installation. If you are using an EC2 instance, routing, network ACLs, and security groups must be configured to allow traffic from the Crossplane pod to the VM.

The commands below assume the VM is an Ubuntu instance.

"},{"location":"patterns/vault-integration/#install-vault","title":"Install Vault","text":"

Run the following commands in your VM.

Install vault on Ubuntu following the vault docs

Configure vault

sudo systemctl enable vault.service\n\n# create a configuration file for vault. NOTE: this creates a vault service with TLS disabled. \n# This is done to make the configuration step easy to follow only. TLS should be enabled for real workloads.\ncat <<< 'ui = true\n\nstorage \"file\" {\n  path = \"/opt/vault/data\"\n}\n\nlistener \"tcp\" {\n  address = \"0.0.0.0:8200\"\n  tls_disable = 1\n}' | sudo -u vault tee /etc/vault.d/vault.hcl > /dev/null\n\nsudo systemctl start vault.service\n\nexport VAULT_ADDR='http://127.0.0.1:8200'\n# This command will print out unseal keys and the root token.\nvault operator init\nvault operator unseal # do this three times. each time with a different unseal key.\nvault secrets enable -path=secret kv-v2\nvault auth enable kubernetes\n

Get the IP address of this instance. For an EC2 instance, it should be the private IP. For a simple EC2 instance:

aws ec2 describe-instances \\\n--filters Name=instance-id,Values=<INSERT_INSTANCE_ID_HERE> \\\n| jq \".Reservations[0].Instances[0].NetworkInterfaces[0].PrivateIpAddress\"\n
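
Optionally, you can confirm that the address is reachable from the cluster before going further; this throwaway pod is just a quick check, and the pod name and image choice are arbitrary.

kubectl run vault-reachability-check --rm -it --restart=Never \\\n  --image=curlimages/curl -- curl -s http://<PRIVATE_IP_ADDRESS>:8200/v1/sys/health\n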

"},{"location":"patterns/vault-integration/#install-vault-agent-sidecar-injector","title":"Install Vault Agent Sidecar Injector","text":"

Run the following commands from a machine where you have access to your Kubernetes cluster, e.g. your laptop. The Vault Agent Sidecar Injector watches for CREATE and UPDATE events and injects Vault secrets into the containers.

kubectl create ns vault-system\n# install vault injector. be sure to use the IP address obtained above.\nhelm -n vault-system install vault hashicorp/vault \\\n    --set \"injector.externalVaultAddr=http://<PRIVATE_IP_ADDRESS>:8200\"\n\nTOKEN_REVIEW_JWT=$(kubectl -n vault-system get secret $(kubectl -n vault-system get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith(\"vault-token-\")).name') --output='go-template={{ .data.token }}' | base64 --decode)\nKUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')\nKUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)\nISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)\n

Configure the Kubernetes authentication method, policy, and role for Crossplane to use. Run the following in your VM:

vault write auth/kubernetes/config \\\n     token_reviewer_jwt=\"$TOKEN_REVIEW_JWT\" \\\n     kubernetes_host=\"$KUBE_HOST\" \\\n     kubernetes_ca_cert=\"$KUBE_CA_CERT\" \\\n     issuer=$ISSUER\n\nvault policy write crossplane - <<EOF\npath \"secret/data/crossplane-system*\" {\n    capabilities = [\"create\", \"read\", \"update\", \"delete\"]\n}\npath \"secret/metadata/crossplane-system*\" {\n    capabilities = [\"create\", \"read\", \"update\", \"delete\"]\n}\nEOF\n\nvault write auth/kubernetes/role/crossplane \\\n    bound_service_account_names=\"*\" \\\n    bound_service_account_namespaces=crossplane-system \\\n    policies=crossplane \\\n    ttl=24h\n
"},{"location":"patterns/vault-integration/#configure-vault","title":"Configure Vault","text":"

For our test cases to work, we need to configure an additional Vault policy and role. Run the following commands in your Vault pod or VM.

# {% raw %}\n# create policy and role for applications to use.\nACCESSOR=$(vault auth list | grep kubernetes | tr -s ' ' | cut -d ' ' -f3)\n\nvault policy write k8s-application - << EOF\npath \"secret/data/crossplane-system/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}/*\" {\n  capabilities = [\"read\", \"list\"]\n}\npath \"secret/metadata/crossplane-system/{{identity.entity.aliases.${ACCESSOR}.metadata.service_account_namespace}}/*\" {\n  capabilities = [\"read\", \"list\"]\n}\nEOF\n\nvault write auth/kubernetes/role/k8s-application \\\n    bound_service_account_names=\"*\" \\\n    bound_service_account_namespaces=\"*\" \\\n    policies=k8s-application \\\n    ttl=1h\n\n# {% endraw %}\n
"},{"location":"patterns/vault-integration/#install-and-configure-crossplane","title":"Install and configure Crossplane","text":"

Crossplane must be configured with external secret store support. In addition, the Crossplane pod must have access to the vault token.

kubectl create ns crossplane-system\nhelm upgrade --install crossplane crossplane-stable/crossplane --namespace crossplane-system \\\n  --version 1.10.0 \\\n  --set 'args={--enable-external-secret-stores}' \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-inject\"=true \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-inject-token\"=true \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/role\"=crossplane \\\n  --set-string customAnnotations.\"vault\\.hashicorp\\.com/agent-run-as-user\"=65532\n

Once Crossplane is installed, install its AWS provider.

Update the AWS provider YAML file with your role ARN, then execute the following commands.

kubectl apply -f bootstrap/eksctl/crossplane/aws-provider-vault-secret.yaml\nkubectl get ProviderRevision\n# example output\n# NAME                        HEALTHY   REVISION   IMAGE                             STATE    DEP-FOUND   DEP-INSTALLED   AGE\n# provider-aws-a2e16ca2fc1a   True      1          crossplane/provider-aws:v0.29.0   Active                               23s\n

StoreConfig objects provide Crossplane and its providers with information about how to connect to secret stores. These objects must be configured for external secret integrations to work.

Update the store config YAML file with your endpoint information. If you configured Vault outside of the cluster, this should be the private IP address, e.g. 10.0.0.1:8200.

kubectl apply -f bootstrap/eksctl/crossplane/store-config-vault.yaml\n\necho \"apiVersion: aws.crossplane.io/v1beta1\nkind: ProviderConfig\nmetadata:\n  name: application1-provider-config\nspec:\n  credentials:\n    source: InjectedIdentity\" | kubectl apply -f - \n

This creates two configurations for secret stores: - A configuration named in-cluster for Crossplane (Compositions). This tells Crossplane to store composition secrets in the same cluster as Kubernetes secrets. - Another configuration named vault for the AWS provider. This tells the provider to store secrets in the Vault instance under the /secret/crossplane-system path. To access the Vault instance, a token is created by the sidecar at /vault/secrets/token.

"},{"location":"patterns/vault-integration/#create-compositions","title":"Create compositions","text":"

Apply the S3 compositions:

kubectl apply -f compositions/aws-provider/s3\n

The composition of interest is compositions/aws-provider/s3/multi-tenant.yaml. This composition demonstrates the following: - ProviderConfig selection based on the claim's namespace. - Publishing bucket information to both Kubernetes secrets and Vault. - Creating the published Vault secrets under the claim's namespace in Vault.

"},{"location":"patterns/vault-integration/#test-compositions","title":"Test compositions","text":"

Try creating a bucket claim in the default namespace:

kubectl apply -f examples/aws-provider/composite-resources/s3/multi-tenant.yaml\n
Then inspect the events for the bucket:
kubectl describe bucket\n# example events\n# Events:\n#  Type     Reason                   Age               From                                 Message\n#  ----     ------                   ----              ----                                 -------\n#  Warning  CannotConnectToProvider  1s (x5 over 14s)  managed/bucket.s3.aws.crossplane.io  cannot get referenced Provider: ProviderConfig.aws.crossplane.io \"default-provider-config\" not found\n
In the claim file, we specify a provider config name. However, this is patched to use the provider config named <NAMESPACE>-provider-config. This is why the error message indicates that the provider config named default-provider-config is not found.

Since we created a provider config named application1-provider-config, we should be able to create a claim in a namespace called application1.

#create namespace\nkubectl create ns application1 || true\n# create in new namespace\nkubectl apply -n application1 -f examples/aws-provider/composite-resources/s3/multi-tenant.yaml\n\nkubectl -n application1 get objectstorage\n# NAME                      READY   CONNECTION-SECRET   AGE\n# standard-object-storage   True                        22s\n

Once the claim reaches the ready state, you should be able to verify secret creation:

kubectl -n crossplane-system get secret `kubectl get xobjectstorage -o json | jq -r '.items[0].metadata.uid'` -o go-template='{{range $k,$v := .data}}{{printf \"%s: \" $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{\"\\n\"}}{{end}}'\n# example output\n# bucket-name: standard-object-storage-qlgvz-hz2dn\n# region: us-west-2\n

The same information should be available in Vault:

# in your vault installation\nvault kv get secret/crossplane-system/application1/dev/bucket\n# ==================== Secret Path ====================\n# secret/data/crossplane-system/application1/dev/bucket\n#\n# ======= Metadata =======\n# Key                Value\n# ---                -----\n# created_time       2022-07-22T20:51:27.852598176Z\n# custom_metadata    map[awsblueprints.io/composition-name:s3bucket-multi-tenant.awsblueprints.io awsblueprints.io/environment:dev awsblueprints.io/provider:aws secret.crossplane.io/owner-uid:0c601153-358d-45e1-8e0a-0f34991bed82]\n# deletion_time      n/a\n# destroyed          false\n# version            1\n#\n# ====== Data ======\n# Key         Value\n# ---         -----\n# endpoint    standard-object-storage-4p2wr-lxb74\n# region      us-west-2\n
"},{"location":"patterns/vault-integration/#test-applications","title":"Test Applications","text":"

The Vault sidecar injector can inject secrets into pods. Create an example pod that accesses the secret injected by the sidecar:

echo 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: test-pod\n  annotations:\n    vault.hashicorp.com/agent-inject: \"true\"\n    vault.hashicorp.com/role: \"k8s-application\"\n    vault.hashicorp.com/agent-inject-secret-credentials.txt: \"secret/crossplane-system/application1/dev/bucket\"\nspec:\n  containers:\n    - name: busybox\n      image: busybox:1.28\n      command:\n        - sh\n        - -c\n        - echo \"Hello there!\" && cat /vault/secrets/credentials.txt  && sleep 3600' | kubectl apply -f - \n

This will create a pod in the default namespace. However, the pod will not reach the ready state. Check the logs:

kubectl logs  test-pod vault-agent-init\n# URL: GET http://192.168.67.77:8200/v1/secret/data/crossplane-system/application1/dev/bucket\n# Code: 403. Errors:\n\n# * 1 error occurred:\n#   * permission denied\n

This is because the pod is created in the default namespace and the Vault policy we configured earlier does not allow it to access secrets in another namespace.

Try creating the pod in the correct namespace.

echo 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: test-pod\n  namespace: application1\n  annotations:\n    vault.hashicorp.com/agent-inject: \"true\"\n    vault.hashicorp.com/role: \"k8s-application\"\n    vault.hashicorp.com/agent-inject-secret-credentials.txt: \"secret/crossplane-system/application1/dev/bucket\"\nspec:\n  containers:\n    - name: busybox\n      image: busybox:1.28\n      command:\n        - sh\n        - -c\n        - echo \"Hello there!\" && cat /vault/secrets/credentials.txt  && sleep 3600' | kubectl apply -f - \n
The pod should reach the ready state.

kubectl -n application1 logs test-pod busybox\n# Hello there!\n# data: map[endpoint:standard-object-storage-qlgvz-hz2dn region:us-west-2]\n# metadata: map[created_time:2022-07-21T21:27:38.82988124Z custom_metadata:map[awsblueprints.io/composition-name:s3bucket-multi-tenant.awsblueprints.io awsblueprints.io/environment:dev awsblueprints.io/provider:aws secret.crossplane.io/owner-uid:5089919f-e80f-4889-80f4-c8e3cacd8fb7] deletion_time: destroyed:false version:1]\n
"}]} \ No newline at end of file