Merge branch 'main' into opentelemetry-podphase
Joibel authored Aug 19, 2024
2 parents e90ca47 + 4728856 commit d379834
Showing 8 changed files with 48 additions and 47 deletions.
7 changes: 0 additions & 7 deletions .snyk

This file was deleted.

2 changes: 1 addition & 1 deletion Makefile
@@ -434,7 +434,7 @@ dist/manifests/%: manifests/%
# lint/test/etc

$(GOPATH)/bin/golangci-lint:
- curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b `go env GOPATH`/bin v1.55.1
+ curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b `go env GOPATH`/bin v1.59.1

.PHONY: lint
lint: server/static/files.go $(GOPATH)/bin/golangci-lint
2 changes: 1 addition & 1 deletion cmd/argoexec/commands/emissary_test.go
@@ -75,7 +75,7 @@ func TestEmissary(t *testing.T) {
go func() {
defer wg.Done()
err := run("sleep 3")
- require.EqualError(t, err, fmt.Sprintf("exit status %d", 128+signal))
+ assert.EqualError(t, err, fmt.Sprintf("exit status %d", 128+signal))
}()
wg.Wait()
}
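The switch from `require` to `assert` inside the spawned goroutine follows the `testing` package's documented rule: `require.*` helpers call `t.FailNow`, which must only run on the goroutine executing the test, while `assert.*` only records the failure. A minimal sketch of the safe pattern, using an illustrative `doWork` helper that is not part of this repository:

```go
package example

import (
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

// doWork is a stand-in for whatever the spawned goroutine actually does.
func doWork() error { return nil }

func TestInsideGoroutine(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		err := doWork()
		// assert.NoError only marks the test as failed; require.NoError would
		// call t.FailNow, which is not safe off the test goroutine.
		assert.NoError(t, err)
	}()
	wg.Wait()
}
```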
68 changes: 37 additions & 31 deletions docs/configure-artifact-repository.md
@@ -340,16 +340,20 @@ data:
## Configuring Azure Blob Storage
- Create an Azure Storage account and a container within that account. There are several
- ways to accomplish this, including the [Azure Portal](https://portal.azure.com) or the
- [CLI](https://docs.microsoft.com/en-us/cli/azure/).
+ Create an Azure Storage account and a container within your account.
+ You can use the [Azure Portal](https://portal.azure.com), the [CLI](https://docs.microsoft.com/en-us/cli/azure/), or other tools.
- There are multiple ways to allow Argo to authenticate its access to your Azure storage account.
- The preferred method is via [Azure managed identities](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity).
- If a managed identity has been assigned to the machines running the workflow then `useSDKCreds` can be set to true in the workflow yaml.
- If `useSDKCreds` is set to `true`, then the `accountKeySecret` value is not
- used and authentication with Azure will be attempted using
- [`DefaultAzureCredential`](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).
+ You can authenticate Argo to your Azure storage account in multiple ways:
+ - [Managed Identities](#using-azure-managed-identities)
+ - [Access Keys](#using-azure-access-keys)
+ - [Shared Access Signatures (SAS)](#using-azure-shared-access-signatures-sas)
+ ### Using Azure Managed Identities
+ [Azure Managed Identities](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) is the preferred method for managing access to Azure resources securely.
+ You can set `useSDKCreds: true` if a Managed Identity is assigned.
+ In this case, the `accountKeySecret` is not used and authentication uses [`DefaultAzureCredential`](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).

```yaml
artifacts:
@@ -360,36 +364,37 @@ artifacts:
container: my-container-name
blob: path/in/container
# If a managed identity has been assigned to the machines running the
- # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
+ # workflow (for example, https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
# then useSDKCreds should be set to true. The accountKeySecret is not required
# and will not be used in this case.
useSDKCreds: true
```

- In addition to managed identities, Argo workflows also support authentication using access keys and SAS tokens.
+ ### Using Azure Access Keys

- ### Using Azure access keys
+ You can also use an [Access Key](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal).

- 1. Retrieve the blob service endpoint for the storage account. For example:
+ 1. Retrieve the blob service endpoint for the storage account:

```bash
az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv
# https://mystorageaccountname.blob.core.windows.net
```

- 2. Retrieve the access key for the storage account. For example:
+ 2. Retrieve the Access Key for the storage account:

```bash
- az storage account keys list -n mystorageaccountname --query '[0].value' -otsv
+ ACCESS_KEY="$(az storage account keys list -n mystorageaccountname --query '[0].value' -otsv)"
```

- 3. Create a Kubernetes secret to hold the storage account key. For example:
+ 3. Create a Kubernetes Secret to hold the storage account key:

```bash
kubectl create secret generic my-azure-storage-credentials \
--from-literal "account-access-key=$(az storage account keys list -n mystorageaccountname --query '[0].value' -otsv)"
--from-literal "account-access-key=$ACCESS_KEY"
```

- 4. Configure `azure` artifact as follows in the yaml.
+ 4. Configure an `azure` artifact:

```yaml
artifacts:
@@ -400,7 +405,7 @@ In addition to managed identities, Argo workflows also support authentication us
container: my-container-name
blob: path/in/container
# accountKeySecret is a secret selector.
- # It references the k8s secret named 'my-azure-storage-credentials'.
+ # It references the Kubernetes Secret named 'my-azure-storage-credentials'.
# This secret is expected to have the key 'account-access-key',
# containing the base64 encoded credentials to the storage account.
accountKeySecret:
@@ -410,30 +415,31 @@ In addition to managed identities, Argo workflows also support authentication us

### Using Azure Shared Access Signatures (SAS)

- If you do not wish to use an access key, you may also use a [shared access signature (SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json).
- Create an Azure Storage account and a container within that account.
- There are several ways to accomplish this, including the [Azure Portal](https://portal.azure.com) or the [CLI](https://docs.microsoft.com/en-us/cli/azure/).
+ > v3.6 and after

+ You can also use a [Shared Access Signature (SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json).

- 1. Retrieve the blob service endpoint for the storage account. For example:
+ 1. Retrieve the blob service endpoint for the storage account:

```bash
az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv
# https://mystorageaccountname.blob.core.windows.net
```

- 2. Retrieve the shared access signature for the storage account. For example:
+ 2. Generate a Shared Access Signature for the storage account:

```bash
- az storage container generate-sas --account-name <storage-account> --name <container> --permissions acdlrw --expiry <date-time> --auth-mode key
+ SAS_TOKEN="$(az storage container generate-sas --account-name <storage-account> --name <container> --permissions acdlrw --expiry <date-time> --auth-mode key)"
```

- 3. Create a Kubernetes secret to hold the storage account key. For example:
+ 3. Create a Kubernetes Secret to hold the storage account key:

```bash
kubectl create secret generic my-azure-storage-credentials \
--from-literal "sas=$(az storage container generate-sas --account-name <storage-account> --name <container> --permissions acdlrw --expiry <date-time> --auth-mode key)"
--from-literal "shared-access-key=$SAS_TOKEN"
```

- 4. Configure `azure` artifact as follows in the yaml.
+ 4. Configure an `azure` artifact:

```yaml
artifacts:
@@ -444,12 +450,12 @@ There are several ways to accomplish this, including the [Azure Portal](https://
container: my-container-name
blob: path/in/container
# accountKeySecret is a secret selector.
- # It references the k8s secret named 'my-azure-storage-credentials'.
- # This secret is expected to have the key 'sas',
+ # It references the Kubernetes Secret named 'my-azure-storage-credentials'.
+ # This secret is expected to have the key 'shared-access-key',
# containing the base64 encoded shared access signature to the storage account.
accountKeySecret:
name: my-azure-storage-credentials
- key: sas
+ key: shared-access-key
```

## Configure the Default Artifact Repository
1 change: 1 addition & 0 deletions docs/node-field-selector.md
@@ -23,6 +23,7 @@ The field can be any of:
| Field | Description|
|----------|------------|
| `displayName`| Display name of the node. This is the name of the node as it is displayed on the CLI or UI, without considering its ancestors (see example below). This is a useful shortcut if there is only one node with the same `displayName` |
+ | `id` | ID of the node, a unique identifier for the node which can be discovered by reading the status of the workflow. |
| `name`| Full name of the node. This is the full name of the node, including its ancestors (see example below). Using `name` is necessary when two or more nodes share the same `displayName` and disambiguation is required. |
| `templateName`| Template name of the node |
| `phase`| Phase status of the node - e.g. Running |
4 changes: 2 additions & 2 deletions server/auth/sso/sso.go
@@ -185,7 +185,7 @@ func newSso(
}

var filterGroupsRegex []*regexp.Regexp
- if c.FilterGroupsRegex != nil && len(c.FilterGroupsRegex) > 0 {
+ if len(c.FilterGroupsRegex) > 0 {
for _, regex := range c.FilterGroupsRegex {
compiledRegex, err := regexp.Compile(regex)
if err != nil {
@@ -297,7 +297,7 @@ func (s *sso) HandleCallback(w http.ResponseWriter, r *http.Request) {
}

// only return groups that match at least one of the regexes
- if s.filterGroupsRegex != nil && len(s.filterGroupsRegex) > 0 {
+ if len(s.filterGroupsRegex) > 0 {
var filteredGroups []string
for _, group := range groups {
for _, regex := range s.filterGroupsRegex {
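The simplification in both branches of sso.go relies on a language guarantee rather than anything Argo-specific: `len` of a nil slice is defined to be 0 in Go, so the extra nil check is redundant (the kind of pattern golangci-lint's `gosimple` S1009 check flags). A small stand-alone illustration:

```go
package main

import "fmt"

func main() {
	var regexes []string // nil slice: declared but never assigned
	fmt.Println(regexes == nil) // true
	fmt.Println(len(regexes))   // 0, since len is defined for nil slices
	// This guard behaves identically to `regexes != nil && len(regexes) > 0`:
	if len(regexes) > 0 {
		fmt.Println("have patterns")
	}
}
```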
6 changes: 3 additions & 3 deletions server/workflow/workflow_server_test.go
@@ -690,7 +690,7 @@ func TestWatchWorkflows(t *testing.T) {
ctx, cancel := context.WithCancel(ctx)
go func() {
err := server.WatchWorkflows(&workflowpkg.WatchWorkflowsRequest{}, &testWatchWorkflowServer{testServerStream{ctx}})
- require.NoError(t, err)
+ assert.NoError(t, err)
}()
cancel()
}
@@ -708,7 +708,7 @@ func TestWatchLatestWorkflow(t *testing.T) {
FieldSelector: util.GenerateFieldSelectorFromWorkflowName("@latest"),
},
}, &testWatchWorkflowServer{testServerStream{ctx}})
- require.NoError(t, err)
+ assert.NoError(t, err)
}()
cancel()
}
@@ -912,7 +912,7 @@ func TestPodLogs(t *testing.T) {
Namespace: "workflows",
LogOptions: &corev1.PodLogOptions{},
}, &testPodLogsServer{testServerStream{ctx}})
- require.NoError(t, err)
+ assert.NoError(t, err)
}()
cancel()
}
5 changes: 3 additions & 2 deletions workflow/artifacts/http/http_test.go
@@ -104,8 +104,9 @@ func TestSaveHTTPArtifactRedirect(t *testing.T) {
// check that content is really there
buf := new(bytes.Buffer)
_, err = buf.ReadFrom(r.Body)
- require.NoError(t, err)
- assert.Equal(t, content, buf.String())
+ if assert.NoError(t, err) {
+     assert.Equal(t, content, buf.String())
+ }

w.WriteHeader(http.StatusCreated)
}
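In this hunk the assertions sit inside a test HTTP handler (note the `w.WriteHeader` just below), which typically runs on a server goroutine rather than the test goroutine, so `require.NoError` is replaced with a guard. Testify's `assert` functions return a `bool`, so the dependent comparison can simply be skipped when the read fails. A sketch of the pattern, with an assumed `checkUpload` helper and expected value that are not from this repository:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// checkUpload is an illustrative helper: body and err would come from reading
// the request body inside a test server handler.
func checkUpload(t *testing.T, body string, err error) {
	// assert.NoError returns false (and marks the test failed) when err != nil,
	// so the content comparison only runs after a successful read.
	if assert.NoError(t, err) {
		assert.Equal(t, "expected content", body)
	}
}
```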
