From b0ad007d2cde946257191853e79824f9fa2a4042 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Sat, 17 Aug 2024 01:23:30 -0400
Subject: [PATCH 1/4] ci: Remove Snyk ignore for vulnerability for jackc/pgx/v4 (#13481)

Signed-off-by: Yuan Tang
---
 .snyk | 7 -------
 1 file changed, 7 deletions(-)
 delete mode 100644 .snyk

diff --git a/.snyk b/.snyk
deleted file mode 100644
index 4e8dc58f1dfe..000000000000
--- a/.snyk
+++ /dev/null
@@ -1,7 +0,0 @@
-# Snyk (https://snyk.io) policy file, patches or ignores known vulnerabilities
-version: v1.25.0
-ignore:
-  SNYK-GOLANG-GITHUBCOMJACKCPGXV4-7416900:
-    - '*':
-        reason: "False Positive: vuln is for pgx/v5, not pgx/v4: https://pkg.go.dev/vuln/GO-2024-2567"
-        expires: 2024-09-01T11:11:11.001Z

From 4b47f43d0ca12433a7665596cb6371ca531100be Mon Sep 17 00:00:00 2001
From: Kavish Dahekar <160044920+kavishdahekar-sap@users.noreply.github.com>
Date: Sat, 17 Aug 2024 19:19:18 +0200
Subject: [PATCH 2/4] docs: adhere to style guide for Azure SAS support (#13376)

Signed-off-by: Kavish Nareshchandra Dahekar
Signed-off-by: Kavish Dahekar <160044920+kavishdahekar-sap@users.noreply.github.com>
Signed-off-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com>
Co-authored-by: Anton Gilgur <4970083+agilgur5@users.noreply.github.com>
---
 docs/configure-artifact-repository.md | 68 +++++++++++++++------------
 1 file changed, 37 insertions(+), 31 deletions(-)

diff --git a/docs/configure-artifact-repository.md b/docs/configure-artifact-repository.md
index 94460203f93f..2261015a5abd 100644
--- a/docs/configure-artifact-repository.md
+++ b/docs/configure-artifact-repository.md
@@ -340,16 +340,20 @@ data:

## Configuring Azure Blob Storage

-Create an Azure Storage account and a container within that account. There are several
-ways to accomplish this, including the [Azure Portal](https://portal.azure.com) or the
-[CLI](https://docs.microsoft.com/en-us/cli/azure/).
+Create an Azure Storage account and a container within your account.
+You can use the [Azure Portal](https://portal.azure.com), the [CLI](https://docs.microsoft.com/en-us/cli/azure/), or other tools.

-There are multiple ways to allow Argo to authenticate its access to your Azure storage account.
-The preferred method is via [Azure managed identities](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity).
-If a managed identity has been assigned to the machines running the workflow then `useSDKCreds` can be set to true in the workflow yaml.
-If `useSDKCreds` is set to `true`, then the `accountKeySecret` value is not
-used and authentication with Azure will be attempted using
-[`DefaultAzureCredential`](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).
+You can authenticate Argo to your Azure storage account in multiple ways:
+
+- [Managed Identities](#using-azure-managed-identities)
+- [Access Keys](#using-azure-access-keys)
+- [Shared Access Signatures (SAS)](#using-azure-shared-access-signatures-sas)
+
+### Using Azure Managed Identities
+
+[Azure Managed Identities](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) is the preferred method for managing access to Azure resources securely.
+You can set `useSDKCreds: true` if a Managed Identity is assigned.
+In this case, the `accountKeySecret` is not used and authentication uses [`DefaultAzureCredential`](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).

```yaml
artifacts:
  - name: message
    path: /tmp/message
    azure:
      endpoint: https://mystorageaccountname.blob.core.windows.net
      container: my-container-name
      blob: path/in/container
      # If a managed identity has been assigned to the machines running the
-      # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
+      # workflow (for example, https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
      # then useSDKCreds should be set to true. The accountKeySecret is not required
      # and will not be used in this case.
      useSDKCreds: true
```

-In addition to managed identities, Argo workflows also support authentication using access keys and SAS tokens.
+### Using Azure Access Keys

-### Using Azure access keys
+You can also use an [Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal).

-1. Retrieve the blob service endpoint for the storage account. For example:
+1. Retrieve the blob service endpoint for the storage account:

    ```bash
    az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv
+    # https://mystorageaccountname.blob.core.windows.net
    ```

-2. Retrieve the access key for the storage account. For example:
+2. Retrieve the Access Key for the storage account:

    ```bash
-    az storage account keys list -n mystorageaccountname --query '[0].value' -otsv
+    ACCESS_KEY="$(az storage account keys list -n mystorageaccountname --query '[0].value' -otsv)"
    ```

-3. Create a Kubernetes secret to hold the storage account key. For example:
+3. Create a Kubernetes Secret to hold the storage account key:

    ```bash
    kubectl create secret generic my-azure-storage-credentials \
-      --from-literal "account-access-key=$(az storage account keys list -n mystorageaccountname --query '[0].value' -otsv)"
+      --from-literal "account-access-key=$ACCESS_KEY"
    ```

-4. Configure `azure` artifact as follows in the yaml.
+4. Configure an `azure` artifact:

    ```yaml
    artifacts:
@@ -400,7 +405,7 @@ In addition to managed identities, Argo workflows also support authentication us
        container: my-container-name
        blob: path/in/container
        # accountKeySecret is a secret selector.
-        # It references the k8s secret named 'my-azure-storage-credentials'.
+        # It references the Kubernetes Secret named 'my-azure-storage-credentials'.
        # This secret is expected to have the key 'account-access-key',
        # containing the base64 encoded credentials to the storage account.
        accountKeySecret:
@@ -410,30 +415,31 @@ In addition to managed identities, Argo workflows also support authentication us

### Using Azure Shared Access Signatures (SAS)

-If you do not wish to use an access key, you may also use a [shared access signature (SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json).
-Create an Azure Storage account and a container within that account.
-There are several ways to accomplish this, including the [Azure Portal](https://portal.azure.com) or the [CLI](https://docs.microsoft.com/en-us/cli/azure/).
+> v3.6 and after
+
+You can also use a [Shared Access Signature (SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json).

-1. Retrieve the blob service endpoint for the storage account. For example:
+1. Retrieve the blob service endpoint for the storage account:

    ```bash
    az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv
+    # https://mystorageaccountname.blob.core.windows.net
    ```

-2. Retrieve the shared access signature for the storage account. For example:
+2. Generate a Shared Access Signature for the storage account:

    ```bash
-    az storage container generate-sas --account-name <account-name> --name <container-name> --permissions acdlrw --expiry <expiry-date> --auth-mode key
+    SAS_TOKEN="$(az storage container generate-sas --account-name <account-name> --name <container-name> --permissions acdlrw --expiry <expiry-date> --auth-mode key)"
    ```

-3. Create a Kubernetes secret to hold the storage account key. For example:
+3. Create a Kubernetes Secret to hold the storage account key:

    ```bash
    kubectl create secret generic my-azure-storage-credentials \
-      --from-literal "sas=$(az storage container generate-sas --account-name <account-name> --name <container-name> --permissions acdlrw --expiry <expiry-date> --auth-mode key)"
+      --from-literal "shared-access-key=$SAS_TOKEN"
    ```

-4. Configure `azure` artifact as follows in the yaml.
+4. Configure an `azure` artifact:

    ```yaml
    artifacts:
@@ -444,12 +450,12 @@ There are several ways to accomplish this, including the [Azure Portal](https://
        container: my-container-name
        blob: path/in/container
        # accountKeySecret is a secret selector.
-        # It references the k8s secret named 'my-azure-storage-credentials'.
-        # This secret is expected to have the key 'sas',
+        # It references the Kubernetes Secret named 'my-azure-storage-credentials'.
+        # This secret is expected to have the key 'shared-access-key',
        # containing the base64 encoded shared access signature to the storage account.
        accountKeySecret:
          name: my-azure-storage-credentials
-          key: sas
+          key: shared-access-key
    ```

## Configure the Default Artifact Repository

From bc22fb5bbb8b47756557e405fce69167d796d030 Mon Sep 17 00:00:00 2001
From: Alan Clucas
Date: Sat, 17 Aug 2024 18:22:46 +0100
Subject: [PATCH 3/4] chore(deps): bump `golangci-lint` from 1.55.1 to 1.59.1 (#13473)

Signed-off-by: Alan Clucas
---
 Makefile                                | 2 +-
 cmd/argoexec/commands/emissary_test.go  | 2 +-
 server/auth/sso/sso.go                  | 4 ++--
 server/workflow/workflow_server_test.go | 6 +++---
 workflow/artifacts/http/http_test.go    | 5 +++--
 5 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/Makefile b/Makefile
index 3a6e3471ea36..f9ae511426d3 100644
--- a/Makefile
+++ b/Makefile
@@ -434,7 +434,7 @@ dist/manifests/%: manifests/%
# lint/test/etc

$(GOPATH)/bin/golangci-lint:
-	curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b `go env GOPATH`/bin v1.55.1
+	curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b `go env GOPATH`/bin v1.59.1

.PHONY: lint
lint: server/static/files.go $(GOPATH)/bin/golangci-lint
diff --git a/cmd/argoexec/commands/emissary_test.go b/cmd/argoexec/commands/emissary_test.go
index 946d728a529e..3313e0e789f4 100644
--- a/cmd/argoexec/commands/emissary_test.go
+++ b/cmd/argoexec/commands/emissary_test.go
@@ -75,7 +75,7 @@ func TestEmissary(t *testing.T) {
		go func() {
			defer wg.Done()
			err := run("sleep 3")
-			require.EqualError(t, err, fmt.Sprintf("exit status %d", 128+signal))
+			assert.EqualError(t, err, fmt.Sprintf("exit status %d", 128+signal))
		}()
		wg.Wait()
	}
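The `require` to `assert` swaps in the test hunks above and below follow from how `testify` behaves inside goroutines: `require` assertions stop the test via `t.FailNow`, which Go's `testing` package only allows from the goroutine running the test function, while `assert` merely records the failure and is therefore safe in spawned goroutines. A minimal sketch of the pattern, with a hypothetical `doWork` helper that is not part of this patch:

```go
package example

import (
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

// doWork is a hypothetical stand-in for the code under test.
func doWork() error { return nil }

func TestAssertInsideGoroutine(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// assert records a failure without calling t.FailNow, so it is safe here;
		// require.NoError would try to stop the test from the wrong goroutine.
		assert.NoError(t, doWork())
	}()
	wg.Wait()
}
```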
diff --git a/server/auth/sso/sso.go b/server/auth/sso/sso.go
index 743990d9f517..386d11d450f8 100644
--- a/server/auth/sso/sso.go
+++ b/server/auth/sso/sso.go
@@ -185,7 +185,7 @@ func newSso(
	}

	var filterGroupsRegex []*regexp.Regexp
-	if c.FilterGroupsRegex != nil && len(c.FilterGroupsRegex) > 0 {
+	if len(c.FilterGroupsRegex) > 0 {
		for _, regex := range c.FilterGroupsRegex {
			compiledRegex, err := regexp.Compile(regex)
			if err != nil {
@@ -297,7 +297,7 @@ func (s *sso) HandleCallback(w http.ResponseWriter, r *http.Request) {
	}

	// only return groups that match at least one of the regexes
-	if s.filterGroupsRegex != nil && len(s.filterGroupsRegex) > 0 {
+	if len(s.filterGroupsRegex) > 0 {
		var filteredGroups []string
		for _, group := range groups {
			for _, regex := range s.filterGroupsRegex {
diff --git a/server/workflow/workflow_server_test.go b/server/workflow/workflow_server_test.go
index a6a6d110a315..366ed020b5d1 100644
--- a/server/workflow/workflow_server_test.go
+++ b/server/workflow/workflow_server_test.go
@@ -690,7 +690,7 @@ func TestWatchWorkflows(t *testing.T) {
	ctx, cancel := context.WithCancel(ctx)
	go func() {
		err := server.WatchWorkflows(&workflowpkg.WatchWorkflowsRequest{}, &testWatchWorkflowServer{testServerStream{ctx}})
-		require.NoError(t, err)
+		assert.NoError(t, err)
	}()
	cancel()
}
@@ -708,7 +708,7 @@ func TestWatchLatestWorkflow(t *testing.T) {
				FieldSelector: util.GenerateFieldSelectorFromWorkflowName("@latest"),
			},
		}, &testWatchWorkflowServer{testServerStream{ctx}})
-		require.NoError(t, err)
+		assert.NoError(t, err)
	}()
	cancel()
}
@@ -912,7 +912,7 @@ func TestPodLogs(t *testing.T) {
			Namespace: "workflows",
			LogOptions: &corev1.PodLogOptions{},
		}, &testPodLogsServer{testServerStream{ctx}})
-		require.NoError(t, err)
+		assert.NoError(t, err)
	}()
	cancel()
}
diff --git a/workflow/artifacts/http/http_test.go b/workflow/artifacts/http/http_test.go
index 52e3502e715f..2bf1ebd2bc37 100644
--- a/workflow/artifacts/http/http_test.go
+++ b/workflow/artifacts/http/http_test.go
@@ -104,8 +104,9 @@ func TestSaveHTTPArtifactRedirect(t *testing.T) {
			// check that content is really there
			buf := new(bytes.Buffer)
			_, err = buf.ReadFrom(r.Body)
-			require.NoError(t, err)
-			assert.Equal(t, content, buf.String())
+			if assert.NoError(t, err) {
+				assert.Equal(t, content, buf.String())
+			}

			w.WriteHeader(http.StatusCreated)
		}

From 472885635bce5f5c93238603a885feda2350ed1f Mon Sep 17 00:00:00 2001
From: Alan Clucas
Date: Mon, 19 Aug 2024 05:25:10 +0100
Subject: [PATCH 4/4] docs: document `id` in node field selector (#13463)

Signed-off-by: Alan Clucas
---
 docs/node-field-selector.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/node-field-selector.md b/docs/node-field-selector.md
index b1f11296b978..27df52339cfe 100644
--- a/docs/node-field-selector.md
+++ b/docs/node-field-selector.md
@@ -23,6 +23,7 @@ The field can be any of:
| Field | Description|
|----------|------------|
| `displayName`| Display name of the node. This is the name of the node as it is displayed on the CLI or UI, without considering its ancestors (see example below). This is a useful shortcut if there is only one node with the same `displayName` |
+| `id` | ID of the node, a unique identifier for the node which can be discovered by reading the status of the workflow. |
| `name`| Full name of the node. This is the full name of the node, including its ancestors (see example below). Using `name` is necessary when two or more nodes share the same `displayName` and disambiguation is required. |
| `templateName`| Template name of the node |
| `phase`| Phase status of the node - e.g. Running |
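The node field selector documented in the last patch is passed to CLI commands such as `argo stop` and `argo retry`. A minimal sketch of selecting a node by the new `id` field (the workflow name and node ID here are hypothetical; real node IDs can be read from the workflow's status):

```bash
# Stop only the node whose ID matches. The names are illustrative; node IDs
# appear under status.nodes, for example via: kubectl get wf my-workflow -o yaml
argo stop my-workflow --node-field-selector id=my-workflow-1465447439
```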