Cannot Import ACLs using client.KafkaRestEndpoint #528

Open
tthorne-trayport opened this issue Jan 15, 2025 · 3 comments
Labels: enhancement (New feature or request)

@tthorne-trayport

Currently the import function for the "resource_kafka_acl" resource requires that you set the environment variable "IMPORT_KAFKA_REST_ENDPOINT", as per the code in this block.

I would expect an error only if it can find neither the environment variable "IMPORT_KAFKA_REST_ENDPOINT" nor client.KafkaRestEndpoint. When "IMPORT_KAFKA_REST_ENDPOINT" is unset, I would prefer that it fall back to the value provided at client.KafkaRestEndpoint, so that I can have multiple cluster REST endpoints per set of API keys.

@tthorne-trayport (Author)

I would suggest that this block is modified to something like:

```go
if isImportOperation {
    // Prefer the import-specific env var, then fall back to the
    // provider-level REST endpoint.
    restEndpoint := getEnv("IMPORT_KAFKA_REST_ENDPOINT", "")
    if restEndpoint == "" {
        restEndpoint = client.kafkaRestEndpoint
    }
    if restEndpoint == "" {
        return "", fmt.Errorf("one of provider.kafka_rest_endpoint (defaults to KAFKA_REST_ENDPOINT environment variable) or IMPORT_KAFKA_REST_ENDPOINT environment variable must be set")
    }
    return restEndpoint, nil
}
```

@sajjadlateef sajjadlateef added the enhancement New feature or request label Jan 21, 2025
@linouk23 linouk23 self-assigned this Jan 28, 2025
@linouk23 (Contributor)

Thanks for creating this issue @tthorne-trayport!

Could you confirm whether you're using Option 1 or Option 2?

[Option #1: Manage multiple Kafka clusters in the same Terraform workspace](https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_kafka_acl#option-1-manage-multiple-kafka-clusters-in-the-same-terraform-workspace)
```terraform
provider "confluent" {
  cloud_api_key    = var.confluent_cloud_api_key    # optionally use CONFLUENT_CLOUD_API_KEY env var
  cloud_api_secret = var.confluent_cloud_api_secret # optionally use CONFLUENT_CLOUD_API_SECRET env var
}
```

[Option #2: Manage a single Kafka cluster in the same Terraform workspace](https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_kafka_acl#option-2-manage-a-single-kafka-cluster-in-the-same-terraform-workspace)
```terraform
provider "confluent" {
  kafka_id            = var.kafka_id                   # optionally use KAFKA_ID env var
  kafka_rest_endpoint = var.kafka_rest_endpoint        # optionally use KAFKA_REST_ENDPOINT env var
  kafka_api_key       = var.kafka_api_key              # optionally use KAFKA_API_KEY env var
  kafka_api_secret    = var.kafka_api_secret           # optionally use KAFKA_API_SECRET env var
}
```

```shell
# Option #1: Manage multiple Kafka clusters in the same Terraform workspace
$ export IMPORT_KAFKA_API_KEY="<kafka_api_key>"
$ export IMPORT_KAFKA_API_SECRET="<kafka_api_secret>"
$ export IMPORT_KAFKA_REST_ENDPOINT="<kafka_rest_endpoint>"
$ terraform import confluent_kafka_acl.describe-cluster "lkc-12345/CLUSTER#kafka-cluster#LITERAL#User:sa-xyz123#*#DESCRIBE#ALLOW"

# Option #2: Manage a single Kafka cluster in the same Terraform workspace
$ export CONFLUENT_CLOUD_API_KEY="<cloud_api_key>"
$ export CONFLUENT_CLOUD_API_SECRET="<cloud_api_secret>"
$ terraform import confluent_kafka_acl.describe-cluster "lkc-12345/CLUSTER#kafka-cluster#LITERAL#User:sa-xyz123#*#DESCRIBE#ALLOW"
```

It seems like you're using Option 1, because you're attempting to set IMPORT_KAFKA_REST_ENDPOINT. If that's correct, then we might not want to patch the code, since the expectation is that a user doesn't pass

```terraform
provider "confluent" {
  ...
  kafka_rest_endpoint = var.kafka_rest_endpoint        # optionally use KAFKA_REST_ENDPOINT env var
}
```

when using Option 1. What do you think?

@tthorne-trayport (Author) commented Jan 29, 2025

Hi @linouk23 - thanks for your response.

I am using Option 1: I provide an API key and secret via environment variables rather than the provider block. Some context around our deployment:

We have a Terraform directory/project per environment (Production, Staging, Test, etc.). The environments contain multiple clusters, which poses an issue: in my deployment pipeline I can only specify a single environment variable per environment, and therefore cannot have multiple "KAFKA_REST_ENDPOINT"s. This is a problem for me because the clusters within an environment do not all share the same REST endpoint.

Therefore, when I provide the Import block:

```terraform
import {
  for_each = local.confluent_service_accounts_with_acls
  to       = module.confluent_cloud_kafka_acl.confluent_kafka_acl.kafka_acl[each.key]
  id       = "${each.value.cluster_id}/CLUSTER#kafka-cluster#${each.value.pattern_type}#User:${each.value.principal_id}#${each.value.resource_name}#${each.value.operation}#${each.value.permission}"
}
```
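For context, the map driving that for_each has roughly this shape (illustrative names and values only; the real locals come from our environment configuration):

```terraform
# Illustrative shape only -- the actual values are generated per environment.
locals {
  confluent_service_accounts_with_acls = {
    "app-describe-cluster" = {
      cluster_id    = "lkc-12345"
      pattern_type  = "LITERAL"
      principal_id  = "sa-xyz123"
      resource_name = "*"
      operation     = "DESCRIBE"
      permission    = "ALLOW"
    }
  }
}
```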

I am seeing the error:

```
error importing Kafka Topic: one of provider.kafka_rest_endpoint (defaults to KAFKA_REST_ENDPOINT environment variable) or IMPORT_KAFKA_REST_ENDPOINT environment variable must be set
```

So my question is: if you are not expecting me to pass "IMPORT_KAFKA_REST_ENDPOINT" (which I am not), why am I getting that error?

In the situation where "IMPORT_KAFKA_REST_ENDPOINT" is required and you have multiple clusters per environment with differing REST endpoints, I believe you would have to move to a Terraform directory/project per cluster, or a deployment pipeline/stage per cluster, to provide it. That creates a "special case" for ACLs in our deployment model.
