
Releases: opendatacube/datacube-k8s-eks

v1.10.4

21 Dec 21:24
8210a2d
  • CloudFront - Add configurable header forwarding #276

v1.10.3

14 Dec 04:48
f1e654a
  • Added parameter to optionally configure EKS worker node EBS root volume type #275
  • Added support to configure Cognito user pool schemas #274

v1.10.2

07 Nov 23:46
06514bc

v1.10.1

26 Aug 06:05
b41708d
  • Updated Cognito app-client configuration - #272

v1.9.5

02 Aug 05:27
fc35bae
  • Cognito - support token validity configuration

Release v1.9.1 TF 0.13 module release

23 Dec 01:15
777a4d6
  • TF 0.13 module release TAG v1.9.0: #249
  • ows_eks module bug fix #256

Release v1.8.1 datacube-k8s-eks

26 Jun 07:32
9b072b1

WARNING: This release, whilst minor in functionality change, includes a restructure of several core components. If you wish to apply this refactor to an existing deployment in a non-destructive fashion, you will need to use terraform state mv as described below to move your existing state into the new configuration. You should also consider what impacts the change might have on your upstream live configuration and plan accordingly.

Release summary

This release includes bug fixes in several areas and improves upstream configuration options, including:

  • BYO Database - the database module has been separated out of the odc_eks module so you can bring your own database
  • Flux CD upgrade to support Helm v3 (v2 is still supported but will be deprecated)
  • Cognito User Pool can now be used with multiple applications (jupyterhub, airflow, prometheus...)

The most significant changes can be found in PRs #226 #225 #221 #209.

Applying the changes

We recommend applying the changes incrementally to ensure your state movements are performed with minimal breakage. Careful use of terraform plan and reading its output will guide you very effectively. READ THESE RELEASE NOTES AND ENSURE YOU UNDERSTAND THE CHANGES PRIOR TO RUNNING terraform apply.

That's two WARNINGS so far, so be deliberate about this process. If you take the time, you will gain much and lose nothing.

The following notes are provided for the more significant changes:

Flux CD

The flux and helm-operator versions have been upgraded to support both Helm v2 and Helm v3 releases.

This will create new flux and helm-operator Helm release resources, so you will need to update the flux deploy key again for the live repo that flux monitors for updates. Also make sure you add the following configuration to your HelmReleases before doing this upgrade, so that flux doesn't attempt to move them to Helm v3 prematurely.

spec:
  chart:
    name: <name>
    repository: <repo-url>
  releaseName: <release-name>
  helmVersion: v2
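If flux was installed via fluxctl or the Helm chart, the new deploy key can be printed with fluxctl so it can be re-added to the live repo. A minimal sketch (the "flux" namespace is an assumption; adjust to your install):

```shell
# Print flux's SSH public key so it can be re-registered as a
# deploy key on the live repo (namespace "flux" is an assumption)
fluxctl identity --k8s-fwd-ns flux
```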

WARNING - Make db module external in odc_eks to allow BYO

Exposes the odc_rds module (previously the database_layer submodule of the odc_eks module) to solve #224

This change will DESTROY your database setup if you don't manage the state path changes, as it will re-provision all db resources. To overcome this, please follow the steps below:

step 1: Create a db module in your live repo - add a db.tf file alongside your odc_eks module and adjust the configuration.

e.g.:

module "db" {
  # Note: pin to a release tag rather than the master branch
  source = "github.com/opendatacube/datacube-k8s-eks//odc_rds?ref=master"

  # Label prefix for db resources
  name = module.odc_cluster_label.id

  # Networking
  vpc_id                = module.odc_eks.vpc_id
  database_subnet_group = module.odc_eks.database_subnets

  db_name     = local.db_name
  db_multi_az = local.db_multi_az
  # Security groups that are allowed to access the database
  access_security_groups = [module.odc_eks.node_security_group]

  # Engine version
  engine_version = local.db_engine_version

  # Tags
  owner       = local.owner
  namespace   = local.namespace
  environment = local.environment
}

step 2: Execute terraform plan - this will show that all db-related resources will be destroyed from module.odc_eks and new db resources will be provisioned under module.db.

step 3: Manually move all db resources to the new module using terraform state mv (https://www.terraform.io/docs/commands/state/mv.html)

  • Download the backend state file for your odc_eks module as a backup, e.g. ours is called odc_eks_terraform.tfstate.
  • Execute the terraform state mv commands below. This will update your state file:
cd odc_eks/
# Back up the current state first (the filename is an example)
terraform state pull > odc_eks_terraform.tfstate
# List the db resources that need to move
terraform state list | grep "module.odc_eks.module.db"

terraform state mv 'module.odc_eks.module.db.random_string.password' 'module.db.random_string.password'

terraform state mv 'module.odc_eks.module.db.aws_security_group.rds' 'module.db.aws_security_group.rds'

terraform state mv 'module.odc_eks.module.db.aws_db_subnet_group.db_sg' 'module.db.aws_db_subnet_group.db_sg'

terraform state mv 'module.odc_eks.module.db.aws_db_instance.db' 'module.db.aws_db_instance.db'

# Verify state resource list again
terraform state list | grep "module.odc_eks.module.db"

step 4: Execute terraform plan again; it should report no changes. Run terraform apply and DONE.
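As a sketch, the final check looks like this (the exact wording of the message varies between Terraform versions):

```shell
# After the state moves, a plan should propose no infrastructure changes
terraform plan
# Expect output along the lines of:
# "No changes. Infrastructure is up-to-date."
```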

WARNING - Cognito updates

The cognito module has been updated to support multiple apps, and the change is significant enough that it can result in a destroy/create cycle for existing resources if the old resources are not moved to the new state path. This PR enhances the cognito module to solve issues #226 and #210.

If you are migrating from odc terraform version v1.8.0, then you are required to change your cognito.tf configuration as per the README, e.g.:

  module "cognito_auth" {
    source = "github.com/opendatacube/datacube-k8s-eks//cognito?ref=master"
    
    auto_verify = true
    user_pool_name       = "odc-stage-cluster-userpool"
    user_pool_domain     = "odc-stage-cluster-auth"
    user_groups = {
      "dev-group" = {
        "description" = "Group defines Jupyterhub dev users"
        "precedence"  = 5
      },
      "default-group" = {
        "description" = "Group defines Jupyterhub default users"
        "precedence"  = 10
      }
    }
    app_clients = {
      "jupyterhub-client" = {
        callback_urls = [
          "https://app.jupyterhub.example.com/oauth_callback",
          "https://app.jupyterhub.example.com"
        ]
        logout_urls   = [
          "https://app.jupyterhub.example.com"
        ]
        default_redirect_uri = "https://app.jupyterhub.example.com"
        explicit_auth_flows = ["ALLOW_REFRESH_TOKEN_AUTH", "ALLOW_USER_SRP_AUTH", "ALLOW_CUSTOM_AUTH"]
      }
    }
    
    # Default tags + resource labels
    owner           = "odc-owner"
    namespace       = "odc"
    environment     = "stage"
  }

  output "cognito_auth_userpool_jhub_client_id" {
    value     = module.cognito_auth.client_ids["jupyterhub-client"]
    sensitive = true
  }

  output "cognito_auth_userpool_jhub_client_secret" {
    value     = module.cognito_auth.client_secrets["jupyterhub-client"]
    sensitive = true
  }
  • Also, this will recreate the cognito user pool app-clients and user-groups, so if you don't want that, migrate the terraform state as explained below:
**step 1:** Download the latest `odc_eks` module state file - just for sanity
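One way to take that backup, assuming your backend is already configured (the filename is illustrative):

```shell
# Pull the current remote state into a local safety copy
terraform state pull > odc_eks_backup.tfstate
```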

**step 2:** Move the cognito states using terraform state mv for all `aws_cognito_user_group` and `aws_cognito_user_pool_client` (optional) resources, e.g.:
terraform state mv 'module.cognito_auth.aws_cognito_user_pool_client.clients[0]' 'module.cognito_auth.aws_cognito_user_pool_client.clients["jupyterhub-client"]'

terraform state mv 'module.cognito_auth.aws_cognito_user_group.group[0]' 'module.cognito_auth.aws_cognito_user_group.group["dev-group"]'

**step 3:** Execute terraform plan to validate the changes
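For example, a plan that proposes no destroy/create on the cognito resources confirms the state moves were picked up:

```shell
# The plan should not propose replacing any aws_cognito_* resources
terraform plan
```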