Cluster data source #21

Closed · wants to merge 34 commits
Commits (34)
All 34 commits authored by vandyliu.

- ca59516 (Apr 15, 2024) Add standard deployment resource
- ef2c035 (Apr 15, 2024) fix
- 9fb899e (Apr 15, 2024) fix up
- 37390e7 (Apr 15, 2024) fix tests
- eb3fa15 (Apr 15, 2024) fix
- 0eb9300 (Apr 15, 2024) fix
- 93877b0 (Apr 15, 2024) fixup
- 62cd3c1 (Apr 15, 2024) fix
- 9fd236f (Apr 15, 2024) use hosted org for tests
- 9db91f3 (Apr 15, 2024) fix tests
- 1805bf9 (Apr 15, 2024) update precommit hook
- a057f72 (Apr 15, 2024) update err msgs
- d0f96d3 (Apr 15, 2024) fix err msgs
- 314367c (Apr 15, 2024) move to single deployment resource
- aa9c798 (Apr 15, 2024) small fixups
- 3fde542 (Apr 16, 2024) cluster data source
- 7e59960 (Apr 16, 2024) use diff cluster
- eb82b36 (Apr 16, 2024) add examples, update readme
- 14e8789 (Apr 16, 2024) add terraform init
- d0310d9 (Apr 16, 2024) refactor
- df1f650 (Apr 16, 2024) Merge branch 'standard-dep-resource' into cluster-data-source
- cae5cf0 (Apr 16, 2024) refactor
- 01341de (Apr 16, 2024) add more validation
- f72333b (Apr 16, 2024) add example
- 44e1b56 (Apr 16, 2024) update schema
- 6961cbe (Apr 16, 2024) fix test
- 74363d6 (Apr 16, 2024) remove unused var
- 9d7ab0f (Apr 16, 2024) cluster model
- 049c584 (Apr 16, 2024) updating workspace id recreates dep
- 9bc1047 (Apr 16, 2024) var rename
- 5e9e7b9 (Apr 17, 2024) pr comments
- 1b19a84 (Apr 17, 2024) Merge branch 'standard-dep-resource' into cluster-data-source
- 884b83e (Apr 17, 2024) merge w main
- 140e69d (Apr 17, 2024) Apply suggestions from code review
docs/data-sources/cluster.md (82 additions, 0 deletions)
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "astronomer_cluster Data Source - astronomer"
subcategory: ""
description: |-
Cluster data source
---

# astronomer_cluster (Data Source)

Cluster data source

## Example Usage

```terraform
data "astronomer_cluster" "example" {
  id = "clozc036j01to01jrlgvueo8t"
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `id` (String) Cluster identifier

### Read-Only

- `cloud_provider` (String) Cluster cloud provider
- `created_at` (String) Cluster creation timestamp
- `db_instance_type` (String) Cluster database instance type
- `is_limited` (Boolean) Whether the cluster is limited
- `metadata` (Attributes) Cluster metadata (see [below for nested schema](#nestedatt--metadata))
- `name` (String) Cluster name
- `node_pools` (Attributes List) Cluster node pools (see [below for nested schema](#nestedatt--node_pools))
- `pod_subnet_range` (String) Cluster pod subnet range
- `provider_account` (String) Cluster provider account
- `region` (String) Cluster region
- `service_peering_range` (String) Cluster service peering range
- `service_subnet_range` (String) Cluster service subnet range
- `status` (String) Cluster status
- `tags` (Attributes List) Cluster tags (see [below for nested schema](#nestedatt--tags))
- `tenant_id` (String) Cluster tenant ID
- `type` (String) Cluster type
- `updated_at` (String) Cluster last updated timestamp
- `vpc_subnet_range` (String) Cluster VPC subnet range
- `workspace_ids` (List of String) Cluster workspace IDs

<a id="nestedatt--metadata"></a>
### Nested Schema for `metadata`

Read-Only:

- `external_ips` (List of String) Cluster external IPs
- `oidc_issuer_url` (String) Cluster OIDC issuer URL


<a id="nestedatt--node_pools"></a>
### Nested Schema for `node_pools`

Read-Only:

- `cloud_provider` (String) Node pool cloud provider
- `cluster_id` (String) Node pool cluster identifier
- `created_at` (String) Node pool creation timestamp
- `id` (String) Node pool identifier
- `is_default` (Boolean) Whether the node pool is the default node pool of the cluster
- `max_node_count` (Number) Node pool maximum node count
- `name` (String) Node pool name
- `node_instance_type` (String) Node pool node instance type
- `supported_astro_machines` (List of String) Node pool supported Astro machines
- `updated_at` (String) Node pool last updated timestamp


<a id="nestedatt--tags"></a>
### Nested Schema for `tags`

Read-Only:

- `key` (String) Cluster tag key
- `value` (String) Cluster tag value
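
Any of the read-only attributes above can then be referenced elsewhere in a configuration. A minimal sketch, assuming the data source has been read as in the example; the output names and the `is_default` filter are illustrative, not part of this PR:

```terraform
data "astronomer_cluster" "example" {
  id = "clozc036j01to01jrlgvueo8t"
}

# Expose the cluster region as a root module output.
output "cluster_region" {
  value = data.astronomer_cluster.example.region
}

# Collect the IDs of node pools marked as the cluster default.
output "default_node_pool_ids" {
  value = [for np in data.astronomer_cluster.example.node_pools : np.id if np.is_default]
}
```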
examples/data-sources/astronomer_cluster/data-source.tf (3 additions, 0 deletions)
```terraform
data "astronomer_cluster" "example" {
  id = "clozc036j01to01jrlgvueo8t"
}
```
internal/provider/datasources/data_source_cluster.go (117 additions, 0 deletions)
```go
package datasources

import (
	"context"
	"fmt"

	"github.com/astronomer/astronomer-terraform-provider/internal/clients"
	"github.com/astronomer/astronomer-terraform-provider/internal/clients/platform"
	"github.com/astronomer/astronomer-terraform-provider/internal/provider/models"
	"github.com/astronomer/astronomer-terraform-provider/internal/provider/schemas"
	"github.com/astronomer/astronomer-terraform-provider/internal/utils"
	"github.com/hashicorp/terraform-plugin-framework/datasource"
	"github.com/hashicorp/terraform-plugin-framework/datasource/schema"
	"github.com/hashicorp/terraform-plugin-log/tflog"
)

// Ensure provider defined types fully satisfy framework interfaces.
var _ datasource.DataSource = &clusterDataSource{}
var _ datasource.DataSourceWithConfigure = &clusterDataSource{}

func NewClusterDataSource() datasource.DataSource {
	return &clusterDataSource{}
}

// clusterDataSource defines the data source implementation.
type clusterDataSource struct {
	PlatformClient platform.ClientWithResponsesInterface
	OrganizationId string
}

func (d *clusterDataSource) Metadata(
	ctx context.Context,
	req datasource.MetadataRequest,
	resp *datasource.MetadataResponse,
) {
	resp.TypeName = req.ProviderTypeName + "_cluster"
}

func (d *clusterDataSource) Schema(
	ctx context.Context,
	req datasource.SchemaRequest,
	resp *datasource.SchemaResponse,
) {
	resp.Schema = schema.Schema{
		// This description is used by the documentation generator and the language server.
		MarkdownDescription: "Cluster data source",
		Attributes:          schemas.ClusterDataSourceSchemaAttributes(),
	}
}

func (d *clusterDataSource) Configure(
	ctx context.Context,
	req datasource.ConfigureRequest,
	resp *datasource.ConfigureResponse,
) {
	// Prevent panic if the provider has not been configured.
	if req.ProviderData == nil {
		return
	}

	apiClients, ok := req.ProviderData.(models.ApiClientsModel)
	if !ok {
		utils.DataSourceApiClientConfigureError(ctx, req, resp)
		return
	}

	d.PlatformClient = apiClients.PlatformClient
	d.OrganizationId = apiClients.OrganizationId
}

func (d *clusterDataSource) Read(
	ctx context.Context,
	req datasource.ReadRequest,
	resp *datasource.ReadResponse,
) {
	var data models.Cluster

	// Read Terraform configuration data into the model
	resp.Diagnostics.Append(req.Config.Get(ctx, &data)...)
	if resp.Diagnostics.HasError() {
		return
	}

	cluster, err := d.PlatformClient.GetClusterWithResponse(
		ctx,
		d.OrganizationId,
		data.Id.ValueString(),
	)
	if err != nil {
		tflog.Error(ctx, "failed to get cluster", map[string]interface{}{"error": err})
		resp.Diagnostics.AddError(
			"Client Error",
			fmt.Sprintf("Unable to read cluster, got error: %s", err),
		)
		return
	}
	_, diagnostic := clients.NormalizeAPIError(ctx, cluster.HTTPResponse, cluster.Body)
	if diagnostic != nil {
		resp.Diagnostics.Append(diagnostic)
		return
	}
	if cluster.JSON200 == nil {
		tflog.Error(ctx, "failed to get cluster", map[string]interface{}{"error": "nil response"})
		resp.Diagnostics.AddError("Client Error", "Unable to read cluster, got nil response")
		return
	}

	// Populate the model with the response data
	diags := data.ReadFromResponse(ctx, cluster.JSON200)
	if diags.HasError() {
		resp.Diagnostics.Append(diags...)
		return
	}

	// Save data into Terraform state
	resp.Diagnostics.Append(resp.State.Set(ctx, &data)...)
}
```
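
The Read method above applies three guards in sequence: a transport-level error, an API error body, and a nil decoded payload. This triple check is typical of oapi-codegen-style clients. A self-contained sketch of the same guard logic, using a hypothetical stand-in `response` type rather than the provider's generated client:

```go
package main

import (
	"errors"
	"fmt"
)

// response mimics the shape of an oapi-codegen *WithResponse result:
// a raw HTTP status plus an optional decoded JSON200 payload.
type response struct {
	StatusCode int
	JSON200    *struct{ Name string }
}

// checkResponse applies the same three guards used in Read:
// transport error, non-2xx status, and a nil decoded body.
func checkResponse(r *response, err error) error {
	if err != nil {
		return fmt.Errorf("unable to read cluster, got error: %w", err)
	}
	if r.StatusCode != 200 {
		return fmt.Errorf("API error: status %d", r.StatusCode)
	}
	if r.JSON200 == nil {
		return errors.New("unable to read cluster, got nil response")
	}
	return nil
}

func main() {
	ok := &response{StatusCode: 200, JSON200: &struct{ Name string }{Name: "my-cluster"}}
	fmt.Println(checkResponse(ok, nil))                        // <nil>
	fmt.Println(checkResponse(&response{StatusCode: 500}, nil)) // API error: status 500
}
```

The nil-payload guard matters because a 2xx response with an undecodable body would otherwise surface as a confusing nil-pointer panic deep in `ReadFromResponse`.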
internal/provider/datasources/data_source_cluster_test.go (49 additions, 0 deletions)
```go
package datasources_test

import (
	"fmt"
	"os"
	"testing"

	astronomerprovider "github.com/astronomer/astronomer-terraform-provider/internal/provider"
	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
)

func TestAcc_DataSourceCluster(t *testing.T) {
	hybridClusterId := os.Getenv("HYBRID_CLUSTER_ID")
	resourceName := "test_data_cluster_hybrid"
	resourceVar := fmt.Sprintf("data.astronomer_cluster.%v", resourceName)
	resource.Test(t, resource.TestCase{
		PreCheck: func() {
			astronomerprovider.TestAccPreCheck(t)
		},
		ProtoV6ProviderFactories: astronomerprovider.TestAccProtoV6ProviderFactories,
		Steps: []resource.TestStep{
			{
				Config: astronomerprovider.ProviderConfig(t, false) + cluster(resourceName, hybridClusterId),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttrSet(resourceVar, "id"),
					resource.TestCheckResourceAttrSet(resourceVar, "name"),
					resource.TestCheckResourceAttrSet(resourceVar, "cloud_provider"),
					resource.TestCheckResourceAttrSet(resourceVar, "db_instance_type"),
					resource.TestCheckResourceAttrSet(resourceVar, "region"),
					resource.TestCheckResourceAttrSet(resourceVar, "vpc_subnet_range"),
					resource.TestCheckResourceAttrSet(resourceVar, "created_at"),
					resource.TestCheckResourceAttrSet(resourceVar, "updated_at"),
					resource.TestCheckResourceAttr(resourceVar, "type", "HYBRID"),
					resource.TestCheckResourceAttrSet(resourceVar, "provider_account"),
					resource.TestCheckResourceAttrSet(resourceVar, "node_pools.0.id"),
					resource.TestCheckResourceAttrSet(resourceVar, "node_pools.0.name"),
					resource.TestCheckResourceAttrSet(resourceVar, "metadata.external_ips.0"),
				),
			},
		},
	})
}

func cluster(resourceName, clusterId string) string {
	return fmt.Sprintf(`
data astronomer_cluster "%v" {
	id = "%v"
}`, resourceName, clusterId)
}
```
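
The test reads the cluster ID from the `HYBRID_CLUSTER_ID` environment variable, so running it locally would look roughly like the sketch below. `TF_ACC=1` is the standard terraform-plugin-testing gate for acceptance tests; the exact credential variables the provider expects are not shown in this diff:

```shell
# Acceptance tests only run when TF_ACC is set (terraform-plugin-testing convention).
# Provider credentials must also be configured in the environment.
export HYBRID_CLUSTER_ID="clozc036j01to01jrlgvueo8t"
TF_ACC=1 go test ./internal/provider/datasources/ -run TestAcc_DataSourceCluster -v
```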